00:00:00.000 Started by upstream project "autotest-per-patch" build number 132350 00:00:00.000 originally caused by: 00:00:00.000 Started by user sys_sgci 00:00:00.100 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:05.713 The recommended git tool is: git 00:00:05.714 using credential 00000000-0000-0000-0000-000000000002 00:00:05.716 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:05.730 Fetching changes from the remote Git repository 00:00:05.733 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:05.747 Using shallow fetch with depth 1 00:00:05.747 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:05.747 > git --version # timeout=10 00:00:05.761 > git --version # 'git version 2.39.2' 00:00:05.761 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:05.775 Setting http proxy: proxy-dmz.intel.com:911 00:00:05.775 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:09.487 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:09.501 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:09.515 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:09.515 > git config core.sparsecheckout # timeout=10 00:00:09.527 > git read-tree -mu HEAD # timeout=10 00:00:09.545 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:09.568 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:09.568 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:09.654 [Pipeline] Start of Pipeline 00:00:09.669 [Pipeline] library 00:00:09.671 Loading library shm_lib@master 00:00:09.671 Library shm_lib@master is cached. Copying from home. 00:00:09.693 [Pipeline] node 00:00:09.704 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:09.706 [Pipeline] { 00:00:09.719 [Pipeline] catchError 00:00:09.720 [Pipeline] { 00:00:09.733 [Pipeline] wrap 00:00:09.743 [Pipeline] { 00:00:09.749 [Pipeline] stage 00:00:09.750 [Pipeline] { (Prologue) 00:00:09.981 [Pipeline] sh 00:00:10.264 + logger -p user.info -t JENKINS-CI 00:00:10.285 [Pipeline] echo 00:00:10.287 Node: WFP6 00:00:10.294 [Pipeline] sh 00:00:10.591 [Pipeline] setCustomBuildProperty 00:00:10.601 [Pipeline] echo 00:00:10.602 Cleanup processes 00:00:10.606 [Pipeline] sh 00:00:10.889 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.889 1419359 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:10.899 [Pipeline] sh 00:00:11.179 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:11.179 ++ grep -v 'sudo pgrep' 00:00:11.179 ++ awk '{print $1}' 00:00:11.179 + sudo kill -9 00:00:11.179 + true 00:00:11.192 [Pipeline] cleanWs 00:00:11.201 [WS-CLEANUP] Deleting project workspace... 00:00:11.201 [WS-CLEANUP] Deferred wipeout is used... 
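The cleanup step above hunts for stale SPDK processes left in the workspace and kills them before the run starts. A minimal sketch of that pgrep/awk/kill pipeline (the workspace path is taken from the log; the variable name is illustrative):

```shell
#!/bin/sh
# Kill any leftover processes whose command line mentions the job workspace.
WORKSPACE="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk"

# pgrep -af lists "pid full-command" pairs; the grep -v drops the
# 'sudo pgrep' invocation itself, and awk keeps only the pid column.
pids=$(sudo pgrep -af "$WORKSPACE" | grep -v 'sudo pgrep' | awk '{print $1}')

# With an empty pid list 'kill -9' exits nonzero, so '|| true' keeps the
# cleanup step green -- this is why the log shows "+ true" after the kill.
sudo kill -9 $pids || true
```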
00:00:11.207 [WS-CLEANUP] done 00:00:11.210 [Pipeline] setCustomBuildProperty 00:00:11.220 [Pipeline] sh 00:00:11.537 + sudo git config --global --replace-all safe.directory '*' 00:00:11.625 [Pipeline] httpRequest 00:00:11.925 [Pipeline] echo 00:00:11.926 Sorcerer 10.211.164.20 is alive 00:00:11.934 [Pipeline] retry 00:00:11.936 [Pipeline] { 00:00:11.945 [Pipeline] httpRequest 00:00:11.948 HttpMethod: GET 00:00:11.949 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.949 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:11.956 Response Code: HTTP/1.1 200 OK 00:00:11.956 Success: Status code 200 is in the accepted range: 200,404 00:00:11.956 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:32.537 [Pipeline] } 00:00:32.556 [Pipeline] // retry 00:00:32.564 [Pipeline] sh 00:00:32.848 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:32.863 [Pipeline] httpRequest 00:00:33.210 [Pipeline] echo 00:00:33.211 Sorcerer 10.211.164.20 is alive 00:00:33.219 [Pipeline] retry 00:00:33.220 [Pipeline] { 00:00:33.233 [Pipeline] httpRequest 00:00:33.237 HttpMethod: GET 00:00:33.238 URL: http://10.211.164.20/packages/spdk_6f7b42a3aa135b564062c73e08b93022d5b874d8.tar.gz 00:00:33.238 Sending request to url: http://10.211.164.20/packages/spdk_6f7b42a3aa135b564062c73e08b93022d5b874d8.tar.gz 00:00:33.245 Response Code: HTTP/1.1 200 OK 00:00:33.245 Success: Status code 200 is in the accepted range: 200,404 00:00:33.245 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_6f7b42a3aa135b564062c73e08b93022d5b874d8.tar.gz 00:03:50.073 [Pipeline] } 00:03:50.094 [Pipeline] // retry 00:03:50.102 [Pipeline] sh 00:03:50.386 + tar --no-same-owner -xf spdk_6f7b42a3aa135b564062c73e08b93022d5b874d8.tar.gz 00:03:52.964 [Pipeline] sh 00:03:53.248 + git -C spdk log 
--oneline -n5 00:03:53.248 6f7b42a3a test/nvmf: Hook nvmf/setup.sh into nvmf/common.sh 00:03:53.248 6fc96a60f test/nvmf: Prepare replacements for the network setup 00:03:53.248 f22e807f1 test/autobuild: bump minimum version of intel-ipsec-mb 00:03:53.248 8d982eda9 dpdk: add adjustments for recent rte_power changes 00:03:53.249 dcc2ca8f3 bdev: fix per_channel data null when bdev_get_iostat with reset option 00:03:53.259 [Pipeline] } 00:03:53.273 [Pipeline] // stage 00:03:53.282 [Pipeline] stage 00:03:53.284 [Pipeline] { (Prepare) 00:03:53.301 [Pipeline] writeFile 00:03:53.316 [Pipeline] sh 00:03:53.598 + logger -p user.info -t JENKINS-CI 00:03:53.611 [Pipeline] sh 00:03:53.895 + logger -p user.info -t JENKINS-CI 00:03:53.908 [Pipeline] sh 00:03:54.191 + cat autorun-spdk.conf 00:03:54.191 SPDK_RUN_FUNCTIONAL_TEST=1 00:03:54.191 SPDK_TEST_NVMF=1 00:03:54.191 SPDK_TEST_NVME_CLI=1 00:03:54.191 SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:54.191 SPDK_TEST_NVMF_NICS=e810 00:03:54.192 SPDK_TEST_VFIOUSER=1 00:03:54.192 SPDK_RUN_UBSAN=1 00:03:54.192 NET_TYPE=phy 00:03:54.199 RUN_NIGHTLY=0 00:03:54.204 [Pipeline] readFile 00:03:54.227 [Pipeline] withEnv 00:03:54.229 [Pipeline] { 00:03:54.242 [Pipeline] sh 00:03:54.527 + set -ex 00:03:54.527 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:03:54.527 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:54.527 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:54.527 ++ SPDK_TEST_NVMF=1 00:03:54.527 ++ SPDK_TEST_NVME_CLI=1 00:03:54.528 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:54.528 ++ SPDK_TEST_NVMF_NICS=e810 00:03:54.528 ++ SPDK_TEST_VFIOUSER=1 00:03:54.528 ++ SPDK_RUN_UBSAN=1 00:03:54.528 ++ NET_TYPE=phy 00:03:54.528 ++ RUN_NIGHTLY=0 00:03:54.528 + case $SPDK_TEST_NVMF_NICS in 00:03:54.528 + DRIVERS=ice 00:03:54.528 + [[ tcp == \r\d\m\a ]] 00:03:54.528 + [[ -n ice ]] 00:03:54.528 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:03:54.528 rmmod: ERROR: Module mlx4_ib is not currently loaded 
00:03:54.528 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:03:54.528 rmmod: ERROR: Module irdma is not currently loaded 00:03:54.528 rmmod: ERROR: Module i40iw is not currently loaded 00:03:54.528 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:03:54.528 + true 00:03:54.528 + for D in $DRIVERS 00:03:54.528 + sudo modprobe ice 00:03:54.528 + exit 0 00:03:54.537 [Pipeline] } 00:03:54.551 [Pipeline] // withEnv 00:03:54.556 [Pipeline] } 00:03:54.569 [Pipeline] // stage 00:03:54.578 [Pipeline] catchError 00:03:54.580 [Pipeline] { 00:03:54.593 [Pipeline] timeout 00:03:54.593 Timeout set to expire in 1 hr 0 min 00:03:54.594 [Pipeline] { 00:03:54.607 [Pipeline] stage 00:03:54.609 [Pipeline] { (Tests) 00:03:54.622 [Pipeline] sh 00:03:54.905 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:54.905 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:54.905 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:54.905 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:03:54.905 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:54.905 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:54.905 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:03:54.905 + [[ ! 
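The driver-setup step above maps `SPDK_TEST_NVMF_NICS=e810` to the Intel `ice` driver, unloads competing RDMA modules (the `rmmod` errors are expected noise when a module is not loaded), then loads the selected driver. A condensed sketch of that logic, following the trace in the log:

```shell
#!/bin/sh
# Select the kernel driver for the NIC under test (e810 -> ice, per the log).
SPDK_TEST_NVMF_NICS=e810
case $SPDK_TEST_NVMF_NICS in
    e810) DRIVERS=ice ;;
esac

# Unload RDMA drivers that could claim the device; tolerate ones that are
# not currently loaded ('rmmod: ERROR: Module ... is not currently loaded').
sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 || true

# Load the driver(s) chosen above.
for D in $DRIVERS; do
    sudo modprobe "$D"
done
```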
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:54.905 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:03:54.905 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:03:54.906 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:03:54.906 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:03:54.906 + source /etc/os-release 00:03:54.906 ++ NAME='Fedora Linux' 00:03:54.906 ++ VERSION='39 (Cloud Edition)' 00:03:54.906 ++ ID=fedora 00:03:54.906 ++ VERSION_ID=39 00:03:54.906 ++ VERSION_CODENAME= 00:03:54.906 ++ PLATFORM_ID=platform:f39 00:03:54.906 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:03:54.906 ++ ANSI_COLOR='0;38;2;60;110;180' 00:03:54.906 ++ LOGO=fedora-logo-icon 00:03:54.906 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:03:54.906 ++ HOME_URL=https://fedoraproject.org/ 00:03:54.906 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:03:54.906 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:03:54.906 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:03:54.906 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:03:54.906 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:03:54.906 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:03:54.906 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:03:54.906 ++ SUPPORT_END=2024-11-12 00:03:54.906 ++ VARIANT='Cloud Edition' 00:03:54.906 ++ VARIANT_ID=cloud 00:03:54.906 + uname -a 00:03:54.906 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:03:54.906 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:57.445 Hugepages 00:03:57.445 node hugesize free / total 00:03:57.445 node0 1048576kB 0 / 0 00:03:57.445 node0 2048kB 0 / 0 00:03:57.445 node1 1048576kB 0 / 0 00:03:57.445 node1 2048kB 0 / 0 00:03:57.445 00:03:57.445 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:57.445 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:57.445 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:03:57.445 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:57.445 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:57.445 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:57.445 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:57.445 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:57.445 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:57.445 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:57.445 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:57.445 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:57.445 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:57.445 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:57.445 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:57.445 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:57.445 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:03:57.445 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:57.445 + rm -f /tmp/spdk-ld-path 00:03:57.445 + source autorun-spdk.conf 00:03:57.445 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:57.445 ++ SPDK_TEST_NVMF=1 00:03:57.445 ++ SPDK_TEST_NVME_CLI=1 00:03:57.445 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:03:57.445 ++ SPDK_TEST_NVMF_NICS=e810 00:03:57.445 ++ SPDK_TEST_VFIOUSER=1 00:03:57.445 ++ SPDK_RUN_UBSAN=1 00:03:57.445 ++ NET_TYPE=phy 00:03:57.445 ++ RUN_NIGHTLY=0 00:03:57.445 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:03:57.445 + [[ -n '' ]] 00:03:57.445 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:57.445 + for M in /var/spdk/build-*-manifest.txt 00:03:57.445 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:03:57.445 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:57.445 + for M in /var/spdk/build-*-manifest.txt 00:03:57.445 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:03:57.445 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:57.445 + for M in /var/spdk/build-*-manifest.txt 00:03:57.445 + [[ -f 
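The "Hugepages / node hugesize free / total" table above comes from `setup.sh status`, which reads per-NUMA-node hugepage counters from sysfs. A minimal sketch of how those numbers can be gathered on Linux (standard sysfs paths; output formatting is illustrative, not the script's exact layout):

```shell
#!/bin/sh
# Print free/total hugepages per NUMA node and page size, as in the
# "node0 2048kB 0 / 0" rows of the status output.
for node in /sys/devices/system/node/node*; do
    for hp in "$node"/hugepages/hugepages-*; do
        size=${hp##*hugepages-}                      # e.g. "2048kB"
        free=$(cat "$hp/free_hugepages")
        total=$(cat "$hp/nr_hugepages")
        printf '%s %s %s / %s\n' "${node##*/}" "$size" "$free" "$total"
    done
done
```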
/var/spdk/build-repo-manifest.txt ]] 00:03:57.445 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:03:57.445 ++ uname 00:03:57.445 + [[ Linux == \L\i\n\u\x ]] 00:03:57.445 + sudo dmesg -T 00:03:57.705 + sudo dmesg --clear 00:03:57.705 + dmesg_pid=1420824 00:03:57.705 + [[ Fedora Linux == FreeBSD ]] 00:03:57.705 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:57.705 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:57.705 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:03:57.705 + [[ -x /usr/src/fio-static/fio ]] 00:03:57.705 + sudo dmesg -Tw 00:03:57.705 + export FIO_BIN=/usr/src/fio-static/fio 00:03:57.705 + FIO_BIN=/usr/src/fio-static/fio 00:03:57.705 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:03:57.705 + [[ ! -v VFIO_QEMU_BIN ]] 00:03:57.705 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:03:57.705 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:57.705 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:03:57.705 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:03:57.705 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:57.705 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:03:57.705 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:57.705 08:01:11 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:57.705 08:01:11 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:57.705 08:01:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:03:57.705 08:01:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:03:57.705 08:01:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:03:57.705 08:01:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:03:57.705 08:01:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:03:57.705 08:01:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:03:57.705 08:01:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:03:57.705 08:01:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:03:57.705 08:01:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:03:57.705 08:01:11 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:03:57.705 08:01:11 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:57.705 08:01:11 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:03:57.705 08:01:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:57.705 08:01:11 -- scripts/common.sh@15 -- $ shopt -s extglob 00:03:57.705 08:01:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:03:57.705 08:01:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:57.705 08:01:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:57.705 08:01:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.705 08:01:11 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.705 08:01:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.705 08:01:11 -- paths/export.sh@5 -- $ export PATH 00:03:57.705 08:01:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:57.705 08:01:11 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:57.705 08:01:11 -- common/autobuild_common.sh@493 -- $ date +%s 00:03:57.705 08:01:11 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732086071.XXXXXX 00:03:57.705 08:01:11 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732086071.9N0qFA 00:03:57.705 08:01:11 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:03:57.705 08:01:11 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:03:57.705 08:01:11 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:03:57.705 08:01:11 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:03:57.705 08:01:11 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:03:57.705 08:01:11 -- common/autobuild_common.sh@509 -- $ get_config_params 00:03:57.705 08:01:11 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:03:57.705 08:01:11 -- common/autotest_common.sh@10 -- $ set +x 00:03:57.705 08:01:11 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:03:57.705 08:01:11 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:03:57.705 08:01:11 -- pm/common@17 -- $ local monitor 00:03:57.705 08:01:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.705 08:01:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.705 08:01:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.705 08:01:11 -- pm/common@21 -- $ date +%s 00:03:57.705 08:01:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:57.705 08:01:11 -- pm/common@21 -- $ date +%s 00:03:57.705 08:01:11 -- pm/common@25 -- $ sleep 1 00:03:57.705 08:01:11 -- pm/common@21 -- $ date +%s 00:03:57.705 08:01:11 -- pm/common@21 -- $ date +%s 00:03:57.705 08:01:11 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732086071 00:03:57.705 08:01:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732086071 00:03:57.706 08:01:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732086071 00:03:57.706 08:01:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732086071 00:03:57.965 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732086071_collect-cpu-load.pm.log 00:03:57.965 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732086071_collect-vmstat.pm.log 00:03:57.965 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732086071_collect-cpu-temp.pm.log 00:03:57.965 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732086071_collect-bmc-pm.bmc.pm.log 00:03:58.904 08:01:12 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:03:58.904 08:01:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:03:58.904 08:01:12 -- spdk/autobuild.sh@12 -- $ umask 022 00:03:58.904 08:01:12 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:58.904 08:01:12 -- spdk/autobuild.sh@16 -- $ date -u 00:03:58.904 Wed Nov 20 07:01:12 AM UTC 2024 00:03:58.904 08:01:12 -- spdk/autobuild.sh@17 -- $ git describe --tags 
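The prologue above launches several background resource monitors (CPU load, vmstat, CPU temperature, BMC power), each logging under a name suffixed with the same `date +%s` epoch so one run's samples can be correlated. A sketch of that pattern, with `SPDK_DIR` as an assumed placeholder for the checkout path:

```shell
#!/bin/sh
# Start the perf/pm collectors in the background, all sharing one
# timestamp so their logs line up (monitor.autobuild.sh.<epoch>_*.pm.log).
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
outdir="$SPDK_DIR/../output/power"
ts=$(date +%s)

for mon in collect-cpu-load collect-vmstat collect-cpu-temp; do
    "$SPDK_DIR/scripts/perf/pm/$mon" -d "$outdir" -l -p "monitor.autobuild.sh.$ts" &
done
```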
00:03:58.904 v25.01-pre-201-g6f7b42a3a 00:03:58.904 08:01:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:03:58.904 08:01:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:03:58.904 08:01:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:03:58.904 08:01:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:58.904 08:01:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:58.904 08:01:12 -- common/autotest_common.sh@10 -- $ set +x 00:03:58.904 ************************************ 00:03:58.904 START TEST ubsan 00:03:58.904 ************************************ 00:03:58.904 08:01:12 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:03:58.904 using ubsan 00:03:58.904 00:03:58.904 real 0m0.000s 00:03:58.904 user 0m0.000s 00:03:58.904 sys 0m0.000s 00:03:58.904 08:01:12 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:58.904 08:01:12 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:03:58.904 ************************************ 00:03:58.904 END TEST ubsan 00:03:58.904 ************************************ 00:03:58.904 08:01:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:03:58.904 08:01:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:03:58.904 08:01:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:03:58.904 08:01:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:03:58.904 08:01:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:03:58.904 08:01:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:03:58.904 08:01:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:03:58.904 08:01:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:03:58.904 08:01:12 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:03:59.163 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:03:59.163 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:03:59.423 Using 'verbs' RDMA provider 00:04:12.207 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:04:24.416 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:04:24.416 Creating mk/config.mk...done. 00:04:24.416 Creating mk/cc.flags.mk...done. 00:04:24.416 Type 'make' to build. 00:04:24.416 08:01:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:04:24.416 08:01:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:04:24.416 08:01:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:04:24.416 08:01:38 -- common/autotest_common.sh@10 -- $ set +x 00:04:24.416 ************************************ 00:04:24.416 START TEST make 00:04:24.416 ************************************ 00:04:24.416 08:01:38 make -- common/autotest_common.sh@1129 -- $ make -j96 00:04:24.986 make[1]: Nothing to be done for 'all'. 
00:04:26.374 The Meson build system 00:04:26.374 Version: 1.5.0 00:04:26.374 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:04:26.374 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:26.374 Build type: native build 00:04:26.374 Project name: libvfio-user 00:04:26.374 Project version: 0.0.1 00:04:26.374 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:26.374 C linker for the host machine: cc ld.bfd 2.40-14 00:04:26.374 Host machine cpu family: x86_64 00:04:26.374 Host machine cpu: x86_64 00:04:26.374 Run-time dependency threads found: YES 00:04:26.374 Library dl found: YES 00:04:26.374 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:26.374 Run-time dependency json-c found: YES 0.17 00:04:26.374 Run-time dependency cmocka found: YES 1.1.7 00:04:26.374 Program pytest-3 found: NO 00:04:26.374 Program flake8 found: NO 00:04:26.374 Program misspell-fixer found: NO 00:04:26.374 Program restructuredtext-lint found: NO 00:04:26.375 Program valgrind found: YES (/usr/bin/valgrind) 00:04:26.375 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:26.375 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:26.375 Compiler for C supports arguments -Wwrite-strings: YES 00:04:26.375 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:04:26.375 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:04:26.375 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:04:26.375 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:04:26.375 Build targets in project: 8 00:04:26.375 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:04:26.375 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:04:26.375 00:04:26.375 libvfio-user 0.0.1 00:04:26.375 00:04:26.375 User defined options 00:04:26.375 buildtype : debug 00:04:26.375 default_library: shared 00:04:26.375 libdir : /usr/local/lib 00:04:26.375 00:04:26.375 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:26.633 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:26.893 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:04:26.893 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:04:26.893 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:04:26.893 [4/37] Compiling C object samples/lspci.p/lspci.c.o 00:04:26.893 [5/37] Compiling C object samples/null.p/null.c.o 00:04:26.893 [6/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:04:26.893 [7/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:04:26.893 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:04:26.893 [9/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:04:26.893 [10/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:04:26.893 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:04:26.893 [12/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:04:26.893 [13/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:04:26.893 [14/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:04:26.893 [15/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:04:26.893 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:04:26.893 [17/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:04:26.893 [18/37] Compiling C object 
samples/client.p/.._lib_tran_sock.c.o 00:04:26.893 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:04:26.893 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:04:26.893 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:04:26.893 [22/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:04:26.893 [23/37] Compiling C object samples/server.p/server.c.o 00:04:26.893 [24/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:04:26.893 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:04:26.893 [26/37] Compiling C object samples/client.p/client.c.o 00:04:26.893 [27/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:04:26.893 [28/37] Linking target samples/client 00:04:27.153 [29/37] Linking target test/unit_tests 00:04:27.153 [30/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:04:27.153 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:04:27.153 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:04:27.413 [33/37] Linking target samples/shadow_ioeventfd_server 00:04:27.413 [34/37] Linking target samples/gpio-pci-idio-16 00:04:27.413 [35/37] Linking target samples/null 00:04:27.413 [36/37] Linking target samples/server 00:04:27.413 [37/37] Linking target samples/lspci 00:04:27.413 INFO: autodetecting backend as ninja 00:04:27.413 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:27.413 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:04:27.673 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:04:27.673 ninja: no work to do. 
00:04:32.953 The Meson build system 00:04:32.953 Version: 1.5.0 00:04:32.953 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:04:32.953 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:04:32.953 Build type: native build 00:04:32.953 Program cat found: YES (/usr/bin/cat) 00:04:32.953 Project name: DPDK 00:04:32.953 Project version: 24.03.0 00:04:32.953 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:04:32.953 C linker for the host machine: cc ld.bfd 2.40-14 00:04:32.953 Host machine cpu family: x86_64 00:04:32.953 Host machine cpu: x86_64 00:04:32.953 Message: ## Building in Developer Mode ## 00:04:32.953 Program pkg-config found: YES (/usr/bin/pkg-config) 00:04:32.953 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:04:32.953 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:04:32.953 Program python3 found: YES (/usr/bin/python3) 00:04:32.953 Program cat found: YES (/usr/bin/cat) 00:04:32.953 Compiler for C supports arguments -march=native: YES 00:04:32.953 Checking for size of "void *" : 8 00:04:32.953 Checking for size of "void *" : 8 (cached) 00:04:32.953 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:04:32.953 Library m found: YES 00:04:32.953 Library numa found: YES 00:04:32.953 Has header "numaif.h" : YES 00:04:32.953 Library fdt found: NO 00:04:32.953 Library execinfo found: NO 00:04:32.953 Has header "execinfo.h" : YES 00:04:32.953 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:04:32.953 Run-time dependency libarchive found: NO (tried pkgconfig) 00:04:32.953 Run-time dependency libbsd found: NO (tried pkgconfig) 00:04:32.953 Run-time dependency jansson found: NO (tried pkgconfig) 00:04:32.953 Run-time dependency openssl found: YES 3.1.1 00:04:32.953 Run-time 
dependency libpcap found: YES 1.10.4 00:04:32.953 Has header "pcap.h" with dependency libpcap: YES 00:04:32.953 Compiler for C supports arguments -Wcast-qual: YES 00:04:32.953 Compiler for C supports arguments -Wdeprecated: YES 00:04:32.953 Compiler for C supports arguments -Wformat: YES 00:04:32.953 Compiler for C supports arguments -Wformat-nonliteral: NO 00:04:32.953 Compiler for C supports arguments -Wformat-security: NO 00:04:32.953 Compiler for C supports arguments -Wmissing-declarations: YES 00:04:32.953 Compiler for C supports arguments -Wmissing-prototypes: YES 00:04:32.953 Compiler for C supports arguments -Wnested-externs: YES 00:04:32.953 Compiler for C supports arguments -Wold-style-definition: YES 00:04:32.953 Compiler for C supports arguments -Wpointer-arith: YES 00:04:32.953 Compiler for C supports arguments -Wsign-compare: YES 00:04:32.953 Compiler for C supports arguments -Wstrict-prototypes: YES 00:04:32.953 Compiler for C supports arguments -Wundef: YES 00:04:32.953 Compiler for C supports arguments -Wwrite-strings: YES 00:04:32.953 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:04:32.953 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:04:32.953 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:04:32.953 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:04:32.953 Program objdump found: YES (/usr/bin/objdump) 00:04:32.953 Compiler for C supports arguments -mavx512f: YES 00:04:32.953 Checking if "AVX512 checking" compiles: YES 00:04:32.953 Fetching value of define "__SSE4_2__" : 1 00:04:32.953 Fetching value of define "__AES__" : 1 00:04:32.953 Fetching value of define "__AVX__" : 1 00:04:32.953 Fetching value of define "__AVX2__" : 1 00:04:32.953 Fetching value of define "__AVX512BW__" : 1 00:04:32.953 Fetching value of define "__AVX512CD__" : 1 00:04:32.953 Fetching value of define "__AVX512DQ__" : 1 00:04:32.953 Fetching value of define "__AVX512F__" : 1 
00:04:32.953 Fetching value of define "__AVX512VL__" : 1 00:04:32.953 Fetching value of define "__PCLMUL__" : 1 00:04:32.953 Fetching value of define "__RDRND__" : 1 00:04:32.953 Fetching value of define "__RDSEED__" : 1 00:04:32.953 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:04:32.953 Fetching value of define "__znver1__" : (undefined) 00:04:32.953 Fetching value of define "__znver2__" : (undefined) 00:04:32.953 Fetching value of define "__znver3__" : (undefined) 00:04:32.953 Fetching value of define "__znver4__" : (undefined) 00:04:32.953 Compiler for C supports arguments -Wno-format-truncation: YES 00:04:32.953 Message: lib/log: Defining dependency "log" 00:04:32.953 Message: lib/kvargs: Defining dependency "kvargs" 00:04:32.953 Message: lib/telemetry: Defining dependency "telemetry" 00:04:32.953 Checking for function "getentropy" : NO 00:04:32.953 Message: lib/eal: Defining dependency "eal" 00:04:32.953 Message: lib/ring: Defining dependency "ring" 00:04:32.953 Message: lib/rcu: Defining dependency "rcu" 00:04:32.953 Message: lib/mempool: Defining dependency "mempool" 00:04:32.953 Message: lib/mbuf: Defining dependency "mbuf" 00:04:32.953 Fetching value of define "__PCLMUL__" : 1 (cached) 00:04:32.953 Fetching value of define "__AVX512F__" : 1 (cached) 00:04:32.953 Fetching value of define "__AVX512BW__" : 1 (cached) 00:04:32.953 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:04:32.953 Fetching value of define "__AVX512VL__" : 1 (cached) 00:04:32.953 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:04:32.953 Compiler for C supports arguments -mpclmul: YES 00:04:32.953 Compiler for C supports arguments -maes: YES 00:04:32.953 Compiler for C supports arguments -mavx512f: YES (cached) 00:04:32.953 Compiler for C supports arguments -mavx512bw: YES 00:04:32.953 Compiler for C supports arguments -mavx512dq: YES 00:04:32.953 Compiler for C supports arguments -mavx512vl: YES 00:04:32.953 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:04:32.953 Compiler for C supports arguments -mavx2: YES 00:04:32.953 Compiler for C supports arguments -mavx: YES 00:04:32.953 Message: lib/net: Defining dependency "net" 00:04:32.953 Message: lib/meter: Defining dependency "meter" 00:04:32.953 Message: lib/ethdev: Defining dependency "ethdev" 00:04:32.953 Message: lib/pci: Defining dependency "pci" 00:04:32.953 Message: lib/cmdline: Defining dependency "cmdline" 00:04:32.953 Message: lib/hash: Defining dependency "hash" 00:04:32.953 Message: lib/timer: Defining dependency "timer" 00:04:32.953 Message: lib/compressdev: Defining dependency "compressdev" 00:04:32.953 Message: lib/cryptodev: Defining dependency "cryptodev" 00:04:32.953 Message: lib/dmadev: Defining dependency "dmadev" 00:04:32.953 Compiler for C supports arguments -Wno-cast-qual: YES 00:04:32.953 Message: lib/power: Defining dependency "power" 00:04:32.953 Message: lib/reorder: Defining dependency "reorder" 00:04:32.953 Message: lib/security: Defining dependency "security" 00:04:32.953 Has header "linux/userfaultfd.h" : YES 00:04:32.953 Has header "linux/vduse.h" : YES 00:04:32.953 Message: lib/vhost: Defining dependency "vhost" 00:04:32.953 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:04:32.953 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:04:32.953 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:04:32.953 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:04:32.953 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:04:32.953 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:04:32.953 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:04:32.953 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:04:32.953 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:04:32.953 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:04:32.953 Program doxygen found: YES (/usr/local/bin/doxygen) 00:04:32.953 Configuring doxy-api-html.conf using configuration 00:04:32.953 Configuring doxy-api-man.conf using configuration 00:04:32.953 Program mandb found: YES (/usr/bin/mandb) 00:04:32.953 Program sphinx-build found: NO 00:04:32.953 Configuring rte_build_config.h using configuration 00:04:32.953 Message: 00:04:32.953 ================= 00:04:32.953 Applications Enabled 00:04:32.953 ================= 00:04:32.953 00:04:32.953 apps: 00:04:32.953 00:04:32.953 00:04:32.953 Message: 00:04:32.953 ================= 00:04:32.953 Libraries Enabled 00:04:32.953 ================= 00:04:32.953 00:04:32.953 libs: 00:04:32.953 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:04:32.953 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:04:32.953 cryptodev, dmadev, power, reorder, security, vhost, 00:04:32.953 00:04:32.953 Message: 00:04:32.953 =============== 00:04:32.953 Drivers Enabled 00:04:32.953 =============== 00:04:32.953 00:04:32.953 common: 00:04:32.953 00:04:32.953 bus: 00:04:32.953 pci, vdev, 00:04:32.953 mempool: 00:04:32.953 ring, 00:04:32.953 dma: 00:04:32.953 00:04:32.953 net: 00:04:32.953 00:04:32.953 crypto: 00:04:32.954 00:04:32.954 compress: 00:04:32.954 00:04:32.954 vdpa: 00:04:32.954 00:04:32.954 00:04:32.954 Message: 00:04:32.954 ================= 00:04:32.954 Content Skipped 00:04:32.954 ================= 00:04:32.954 00:04:32.954 apps: 00:04:32.954 dumpcap: explicitly disabled via build config 00:04:32.954 graph: explicitly disabled via build config 00:04:32.954 pdump: explicitly disabled via build config 00:04:32.954 proc-info: explicitly disabled via build config 00:04:32.954 test-acl: explicitly disabled via build config 00:04:32.954 test-bbdev: explicitly disabled via build config 00:04:32.954 test-cmdline: explicitly disabled via build config 00:04:32.954 test-compress-perf: explicitly disabled via build config 00:04:32.954 test-crypto-perf: explicitly disabled 
via build config 00:04:32.954 test-dma-perf: explicitly disabled via build config 00:04:32.954 test-eventdev: explicitly disabled via build config 00:04:32.954 test-fib: explicitly disabled via build config 00:04:32.954 test-flow-perf: explicitly disabled via build config 00:04:32.954 test-gpudev: explicitly disabled via build config 00:04:32.954 test-mldev: explicitly disabled via build config 00:04:32.954 test-pipeline: explicitly disabled via build config 00:04:32.954 test-pmd: explicitly disabled via build config 00:04:32.954 test-regex: explicitly disabled via build config 00:04:32.954 test-sad: explicitly disabled via build config 00:04:32.954 test-security-perf: explicitly disabled via build config 00:04:32.954 00:04:32.954 libs: 00:04:32.954 argparse: explicitly disabled via build config 00:04:32.954 metrics: explicitly disabled via build config 00:04:32.954 acl: explicitly disabled via build config 00:04:32.954 bbdev: explicitly disabled via build config 00:04:32.954 bitratestats: explicitly disabled via build config 00:04:32.954 bpf: explicitly disabled via build config 00:04:32.954 cfgfile: explicitly disabled via build config 00:04:32.954 distributor: explicitly disabled via build config 00:04:32.954 efd: explicitly disabled via build config 00:04:32.954 eventdev: explicitly disabled via build config 00:04:32.954 dispatcher: explicitly disabled via build config 00:04:32.954 gpudev: explicitly disabled via build config 00:04:32.954 gro: explicitly disabled via build config 00:04:32.954 gso: explicitly disabled via build config 00:04:32.954 ip_frag: explicitly disabled via build config 00:04:32.954 jobstats: explicitly disabled via build config 00:04:32.954 latencystats: explicitly disabled via build config 00:04:32.954 lpm: explicitly disabled via build config 00:04:32.954 member: explicitly disabled via build config 00:04:32.954 pcapng: explicitly disabled via build config 00:04:32.954 rawdev: explicitly disabled via build config 00:04:32.954 regexdev: 
explicitly disabled via build config 00:04:32.954 mldev: explicitly disabled via build config 00:04:32.954 rib: explicitly disabled via build config 00:04:32.954 sched: explicitly disabled via build config 00:04:32.954 stack: explicitly disabled via build config 00:04:32.954 ipsec: explicitly disabled via build config 00:04:32.954 pdcp: explicitly disabled via build config 00:04:32.954 fib: explicitly disabled via build config 00:04:32.954 port: explicitly disabled via build config 00:04:32.954 pdump: explicitly disabled via build config 00:04:32.954 table: explicitly disabled via build config 00:04:32.954 pipeline: explicitly disabled via build config 00:04:32.954 graph: explicitly disabled via build config 00:04:32.954 node: explicitly disabled via build config 00:04:32.954 00:04:32.954 drivers: 00:04:32.954 common/cpt: not in enabled drivers build config 00:04:32.954 common/dpaax: not in enabled drivers build config 00:04:32.954 common/iavf: not in enabled drivers build config 00:04:32.954 common/idpf: not in enabled drivers build config 00:04:32.954 common/ionic: not in enabled drivers build config 00:04:32.954 common/mvep: not in enabled drivers build config 00:04:32.954 common/octeontx: not in enabled drivers build config 00:04:32.954 bus/auxiliary: not in enabled drivers build config 00:04:32.954 bus/cdx: not in enabled drivers build config 00:04:32.954 bus/dpaa: not in enabled drivers build config 00:04:32.954 bus/fslmc: not in enabled drivers build config 00:04:32.954 bus/ifpga: not in enabled drivers build config 00:04:32.954 bus/platform: not in enabled drivers build config 00:04:32.954 bus/uacce: not in enabled drivers build config 00:04:32.954 bus/vmbus: not in enabled drivers build config 00:04:32.954 common/cnxk: not in enabled drivers build config 00:04:32.954 common/mlx5: not in enabled drivers build config 00:04:32.954 common/nfp: not in enabled drivers build config 00:04:32.954 common/nitrox: not in enabled drivers build config 00:04:32.954 
common/qat: not in enabled drivers build config 00:04:32.954 common/sfc_efx: not in enabled drivers build config 00:04:32.954 mempool/bucket: not in enabled drivers build config 00:04:32.954 mempool/cnxk: not in enabled drivers build config 00:04:32.954 mempool/dpaa: not in enabled drivers build config 00:04:32.954 mempool/dpaa2: not in enabled drivers build config 00:04:32.954 mempool/octeontx: not in enabled drivers build config 00:04:32.954 mempool/stack: not in enabled drivers build config 00:04:32.954 dma/cnxk: not in enabled drivers build config 00:04:32.954 dma/dpaa: not in enabled drivers build config 00:04:32.954 dma/dpaa2: not in enabled drivers build config 00:04:32.954 dma/hisilicon: not in enabled drivers build config 00:04:32.954 dma/idxd: not in enabled drivers build config 00:04:32.954 dma/ioat: not in enabled drivers build config 00:04:32.954 dma/skeleton: not in enabled drivers build config 00:04:32.954 net/af_packet: not in enabled drivers build config 00:04:32.954 net/af_xdp: not in enabled drivers build config 00:04:32.954 net/ark: not in enabled drivers build config 00:04:32.954 net/atlantic: not in enabled drivers build config 00:04:32.954 net/avp: not in enabled drivers build config 00:04:32.954 net/axgbe: not in enabled drivers build config 00:04:32.954 net/bnx2x: not in enabled drivers build config 00:04:32.954 net/bnxt: not in enabled drivers build config 00:04:32.954 net/bonding: not in enabled drivers build config 00:04:32.954 net/cnxk: not in enabled drivers build config 00:04:32.954 net/cpfl: not in enabled drivers build config 00:04:32.954 net/cxgbe: not in enabled drivers build config 00:04:32.954 net/dpaa: not in enabled drivers build config 00:04:32.954 net/dpaa2: not in enabled drivers build config 00:04:32.954 net/e1000: not in enabled drivers build config 00:04:32.954 net/ena: not in enabled drivers build config 00:04:32.954 net/enetc: not in enabled drivers build config 00:04:32.954 net/enetfec: not in enabled drivers build 
config 00:04:32.954 net/enic: not in enabled drivers build config 00:04:32.954 net/failsafe: not in enabled drivers build config 00:04:32.954 net/fm10k: not in enabled drivers build config 00:04:32.954 net/gve: not in enabled drivers build config 00:04:32.954 net/hinic: not in enabled drivers build config 00:04:32.954 net/hns3: not in enabled drivers build config 00:04:32.954 net/i40e: not in enabled drivers build config 00:04:32.954 net/iavf: not in enabled drivers build config 00:04:32.954 net/ice: not in enabled drivers build config 00:04:32.954 net/idpf: not in enabled drivers build config 00:04:32.954 net/igc: not in enabled drivers build config 00:04:32.954 net/ionic: not in enabled drivers build config 00:04:32.954 net/ipn3ke: not in enabled drivers build config 00:04:32.954 net/ixgbe: not in enabled drivers build config 00:04:32.954 net/mana: not in enabled drivers build config 00:04:32.954 net/memif: not in enabled drivers build config 00:04:32.954 net/mlx4: not in enabled drivers build config 00:04:32.954 net/mlx5: not in enabled drivers build config 00:04:32.954 net/mvneta: not in enabled drivers build config 00:04:32.954 net/mvpp2: not in enabled drivers build config 00:04:32.954 net/netvsc: not in enabled drivers build config 00:04:32.954 net/nfb: not in enabled drivers build config 00:04:32.954 net/nfp: not in enabled drivers build config 00:04:32.954 net/ngbe: not in enabled drivers build config 00:04:32.954 net/null: not in enabled drivers build config 00:04:32.954 net/octeontx: not in enabled drivers build config 00:04:32.954 net/octeon_ep: not in enabled drivers build config 00:04:32.954 net/pcap: not in enabled drivers build config 00:04:32.954 net/pfe: not in enabled drivers build config 00:04:32.954 net/qede: not in enabled drivers build config 00:04:32.954 net/ring: not in enabled drivers build config 00:04:32.954 net/sfc: not in enabled drivers build config 00:04:32.954 net/softnic: not in enabled drivers build config 00:04:32.954 net/tap: 
not in enabled drivers build config 00:04:32.954 net/thunderx: not in enabled drivers build config 00:04:32.954 net/txgbe: not in enabled drivers build config 00:04:32.954 net/vdev_netvsc: not in enabled drivers build config 00:04:32.954 net/vhost: not in enabled drivers build config 00:04:32.954 net/virtio: not in enabled drivers build config 00:04:32.954 net/vmxnet3: not in enabled drivers build config 00:04:32.954 raw/*: missing internal dependency, "rawdev" 00:04:32.954 crypto/armv8: not in enabled drivers build config 00:04:32.954 crypto/bcmfs: not in enabled drivers build config 00:04:32.954 crypto/caam_jr: not in enabled drivers build config 00:04:32.954 crypto/ccp: not in enabled drivers build config 00:04:32.954 crypto/cnxk: not in enabled drivers build config 00:04:32.954 crypto/dpaa_sec: not in enabled drivers build config 00:04:32.954 crypto/dpaa2_sec: not in enabled drivers build config 00:04:32.954 crypto/ipsec_mb: not in enabled drivers build config 00:04:32.954 crypto/mlx5: not in enabled drivers build config 00:04:32.954 crypto/mvsam: not in enabled drivers build config 00:04:32.954 crypto/nitrox: not in enabled drivers build config 00:04:32.954 crypto/null: not in enabled drivers build config 00:04:32.954 crypto/octeontx: not in enabled drivers build config 00:04:32.954 crypto/openssl: not in enabled drivers build config 00:04:32.954 crypto/scheduler: not in enabled drivers build config 00:04:32.955 crypto/uadk: not in enabled drivers build config 00:04:32.955 crypto/virtio: not in enabled drivers build config 00:04:32.955 compress/isal: not in enabled drivers build config 00:04:32.955 compress/mlx5: not in enabled drivers build config 00:04:32.955 compress/nitrox: not in enabled drivers build config 00:04:32.955 compress/octeontx: not in enabled drivers build config 00:04:32.955 compress/zlib: not in enabled drivers build config 00:04:32.955 regex/*: missing internal dependency, "regexdev" 00:04:32.955 ml/*: missing internal dependency, "mldev" 
00:04:32.955 vdpa/ifc: not in enabled drivers build config 00:04:32.955 vdpa/mlx5: not in enabled drivers build config 00:04:32.955 vdpa/nfp: not in enabled drivers build config 00:04:32.955 vdpa/sfc: not in enabled drivers build config 00:04:32.955 event/*: missing internal dependency, "eventdev" 00:04:32.955 baseband/*: missing internal dependency, "bbdev" 00:04:32.955 gpu/*: missing internal dependency, "gpudev" 00:04:32.955 00:04:32.955 00:04:33.214 Build targets in project: 85 00:04:33.214 00:04:33.214 DPDK 24.03.0 00:04:33.214 00:04:33.214 User defined options 00:04:33.214 buildtype : debug 00:04:33.214 default_library : shared 00:04:33.214 libdir : lib 00:04:33.214 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:04:33.214 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:04:33.214 c_link_args : 00:04:33.214 cpu_instruction_set: native 00:04:33.214 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:04:33.214 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:04:33.214 enable_docs : false 00:04:33.214 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:04:33.214 enable_kmods : false 00:04:33.214 max_lcores : 128 00:04:33.214 tests : false 00:04:33.214 00:04:33.214 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:04:33.788 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:04:33.788 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:04:33.788 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:04:33.788 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:04:33.788 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:04:33.788 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:04:33.788 [6/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:04:33.788 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:04:33.788 [8/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:04:33.788 [9/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:04:33.788 [10/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:04:33.788 [11/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:04:33.788 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:04:33.788 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:04:33.788 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:04:33.788 [15/268] Linking static target lib/librte_kvargs.a 00:04:33.788 [16/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:04:33.788 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:04:33.788 [18/268] Linking static target lib/librte_log.a 00:04:33.788 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:04:34.049 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:04:34.049 [21/268] Linking static target lib/librte_pci.a 00:04:34.049 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:04:34.049 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:04:34.049 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:04:34.049 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:04:34.308 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:04:34.308 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:04:34.308 [28/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:04:34.308 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:04:34.308 [30/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:04:34.308 [31/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:04:34.308 [32/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:04:34.308 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:04:34.308 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:04:34.308 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:04:34.308 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:04:34.308 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:04:34.308 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:04:34.308 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:04:34.308 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:04:34.308 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:04:34.308 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:04:34.308 [43/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:04:34.308 [44/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:04:34.308 [45/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:04:34.308 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:04:34.308 
[47/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:04:34.308 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:04:34.308 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:04:34.308 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:04:34.308 [51/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:04:34.308 [52/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:04:34.308 [53/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:04:34.308 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:04:34.308 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:04:34.308 [56/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:04:34.308 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:04:34.308 [58/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:04:34.308 [59/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:04:34.308 [60/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:04:34.308 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:04:34.308 [62/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:04:34.308 [63/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:04:34.308 [64/268] Linking static target lib/librte_meter.a 00:04:34.308 [65/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:04:34.308 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:04:34.308 [67/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:04:34.308 [68/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:04:34.308 [69/268] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:04:34.308 [70/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:04:34.308 [71/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:04:34.308 [72/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:04:34.308 [73/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.308 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:04:34.308 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:04:34.308 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:04:34.308 [77/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:04:34.308 [78/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:04:34.308 [79/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:04:34.308 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:04:34.308 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:04:34.308 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:04:34.308 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:04:34.308 [84/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:04:34.308 [85/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:04:34.308 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:04:34.308 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:04:34.308 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:04:34.308 [89/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:04:34.308 [90/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:04:34.308 [91/268] Compiling C object 
lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:04:34.308 [92/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:04:34.308 [93/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:04:34.308 [94/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:04:34.308 [95/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:04:34.308 [96/268] Linking static target lib/librte_telemetry.a 00:04:34.308 [97/268] Linking static target lib/librte_ring.a 00:04:34.308 [98/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:04:34.309 [99/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:04:34.309 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:04:34.566 [101/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:04:34.566 [102/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:04:34.566 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:04:34.566 [104/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.566 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:04:34.566 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:04:34.566 [107/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:04:34.566 [108/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:04:34.566 [109/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:04:34.566 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:04:34.566 [111/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:04:34.566 [112/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:04:34.566 [113/268] Linking static target lib/librte_rcu.a 00:04:34.566 [114/268] Compiling C object 
drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:04:34.566 [115/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:04:34.566 [116/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:04:34.566 [117/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:04:34.566 [118/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:04:34.566 [119/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:04:34.566 [120/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:04:34.566 [121/268] Linking static target lib/librte_net.a 00:04:34.566 [122/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:04:34.566 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:04:34.566 [124/268] Linking static target lib/librte_mempool.a 00:04:34.566 [125/268] Linking static target lib/librte_eal.a 00:04:34.566 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:04:34.566 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:04:34.566 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:04:34.567 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:04:34.567 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:04:34.567 [131/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.567 [132/268] Linking static target lib/librte_cmdline.a 00:04:34.567 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:04:34.567 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:04:34.567 [135/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:04:34.567 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:04:34.567 [137/268] Linking static target lib/librte_mbuf.a 00:04:34.567 
[138/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:04:34.567 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:04:34.825 [140/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.825 [141/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.825 [142/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:04:34.825 [143/268] Linking target lib/librte_log.so.24.1 00:04:34.825 [144/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:04:34.825 [145/268] Linking static target lib/librte_timer.a 00:04:34.825 [146/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:04:34.825 [147/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:04:34.825 [148/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:04:34.825 [149/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:04:34.825 [150/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:04:34.825 [151/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.825 [152/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:04:34.825 [153/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.825 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:04:34.825 [155/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:04:34.825 [156/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:04:34.825 [157/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:04:34.825 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:04:34.825 [159/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:04:34.825 [160/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:04:34.825 [161/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:04:34.825 [162/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:04:34.825 [163/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:04:34.825 [164/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:04:34.825 [165/268] Linking static target lib/librte_reorder.a 00:04:34.825 [166/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:04:34.825 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:04:34.825 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:04:34.825 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:04:34.825 [170/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:04:34.825 [171/268] Linking target lib/librte_kvargs.so.24.1 00:04:34.825 [172/268] Linking target lib/librte_telemetry.so.24.1 00:04:34.825 [173/268] Linking static target lib/librte_security.a 00:04:34.825 [174/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:04:34.825 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:04:34.825 [176/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:04:35.084 [177/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:04:35.084 [178/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:04:35.084 [179/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:04:35.084 [180/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:04:35.084 [181/268] Linking static target lib/librte_power.a 00:04:35.084 [182/268] Linking static target lib/librte_dmadev.a 
00:04:35.084 [183/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:04:35.084 [184/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:04:35.084 [185/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:04:35.084 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:04:35.084 [187/268] Linking static target lib/librte_compressdev.a 00:04:35.084 [188/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:04:35.084 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:04:35.084 [190/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:04:35.084 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:04:35.084 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:04:35.084 [193/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:04:35.085 [194/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:35.085 [195/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:04:35.085 [196/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:04:35.085 [197/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:04:35.085 [198/268] Linking static target drivers/librte_bus_vdev.a 00:04:35.085 [199/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:04:35.085 [200/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:04:35.085 [201/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:04:35.085 [202/268] Linking static target lib/librte_hash.a 00:04:35.085 [203/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.344 [204/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 
00:04:35.344 [205/268] Linking static target lib/librte_cryptodev.a 00:04:35.344 [206/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:04:35.344 [207/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:04:35.344 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:35.344 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:04:35.344 [210/268] Linking static target drivers/librte_bus_pci.a 00:04:35.344 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.344 [212/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.344 [213/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:35.344 [214/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:04:35.344 [215/268] Linking static target drivers/librte_mempool_ring.a 00:04:35.344 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.344 [217/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.602 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.602 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:04:35.602 [220/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.602 [221/268] Linking static target lib/librte_ethdev.a 00:04:35.602 [222/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:04:35.602 [223/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:35.861 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 
00:04:35.861 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.119 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:04:36.119 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:04:37.056 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:04:37.056 [229/268] Linking static target lib/librte_vhost.a 00:04:37.056 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:38.957 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.230 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.488 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:04:44.488 [234/268] Linking target lib/librte_eal.so.24.1 00:04:44.746 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:04:44.746 [236/268] Linking target lib/librte_ring.so.24.1 00:04:44.746 [237/268] Linking target lib/librte_timer.so.24.1 00:04:44.746 [238/268] Linking target lib/librte_pci.so.24.1 00:04:44.746 [239/268] Linking target drivers/librte_bus_vdev.so.24.1 00:04:44.746 [240/268] Linking target lib/librte_meter.so.24.1 00:04:44.746 [241/268] Linking target lib/librte_dmadev.so.24.1 00:04:44.746 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:04:44.746 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:04:44.746 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:04:44.746 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:04:44.746 [246/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:04:45.004 
[247/268] Linking target lib/librte_rcu.so.24.1 00:04:45.004 [248/268] Linking target lib/librte_mempool.so.24.1 00:04:45.004 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:04:45.004 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:04:45.004 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:04:45.004 [252/268] Linking target lib/librte_mbuf.so.24.1 00:04:45.004 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:04:45.264 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:04:45.264 [255/268] Linking target lib/librte_net.so.24.1 00:04:45.264 [256/268] Linking target lib/librte_reorder.so.24.1 00:04:45.264 [257/268] Linking target lib/librte_compressdev.so.24.1 00:04:45.264 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:04:45.264 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:04:45.522 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:04:45.522 [261/268] Linking target lib/librte_hash.so.24.1 00:04:45.522 [262/268] Linking target lib/librte_security.so.24.1 00:04:45.522 [263/268] Linking target lib/librte_cmdline.so.24.1 00:04:45.522 [264/268] Linking target lib/librte_ethdev.so.24.1 00:04:45.522 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:04:45.522 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:04:45.522 [267/268] Linking target lib/librte_power.so.24.1 00:04:45.829 [268/268] Linking target lib/librte_vhost.so.24.1 00:04:45.830 INFO: autodetecting backend as ninja 00:04:45.830 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:04:58.083 CC lib/ut_mock/mock.o 00:04:58.083 CC lib/log/log.o 00:04:58.083 CC lib/log/log_flags.o 00:04:58.083 CC 
lib/log/log_deprecated.o 00:04:58.083 CC lib/ut/ut.o 00:04:58.083 LIB libspdk_ut_mock.a 00:04:58.083 LIB libspdk_log.a 00:04:58.083 LIB libspdk_ut.a 00:04:58.083 SO libspdk_ut_mock.so.6.0 00:04:58.083 SO libspdk_log.so.7.1 00:04:58.083 SO libspdk_ut.so.2.0 00:04:58.083 SYMLINK libspdk_ut_mock.so 00:04:58.083 SYMLINK libspdk_ut.so 00:04:58.083 SYMLINK libspdk_log.so 00:04:58.083 CC lib/ioat/ioat.o 00:04:58.083 CC lib/util/base64.o 00:04:58.083 CC lib/util/cpuset.o 00:04:58.083 CC lib/util/bit_array.o 00:04:58.083 CC lib/dma/dma.o 00:04:58.083 CXX lib/trace_parser/trace.o 00:04:58.083 CC lib/util/crc32.o 00:04:58.083 CC lib/util/crc16.o 00:04:58.083 CC lib/util/crc32c.o 00:04:58.083 CC lib/util/crc32_ieee.o 00:04:58.083 CC lib/util/crc64.o 00:04:58.083 CC lib/util/dif.o 00:04:58.083 CC lib/util/fd.o 00:04:58.083 CC lib/util/fd_group.o 00:04:58.083 CC lib/util/file.o 00:04:58.083 CC lib/util/hexlify.o 00:04:58.083 CC lib/util/math.o 00:04:58.083 CC lib/util/iov.o 00:04:58.083 CC lib/util/net.o 00:04:58.083 CC lib/util/pipe.o 00:04:58.083 CC lib/util/strerror_tls.o 00:04:58.083 CC lib/util/string.o 00:04:58.083 CC lib/util/uuid.o 00:04:58.083 CC lib/util/xor.o 00:04:58.083 CC lib/util/zipf.o 00:04:58.083 CC lib/util/md5.o 00:04:58.083 CC lib/vfio_user/host/vfio_user_pci.o 00:04:58.083 CC lib/vfio_user/host/vfio_user.o 00:04:58.083 LIB libspdk_dma.a 00:04:58.083 SO libspdk_dma.so.5.0 00:04:58.083 LIB libspdk_ioat.a 00:04:58.083 SYMLINK libspdk_dma.so 00:04:58.083 SO libspdk_ioat.so.7.0 00:04:58.083 SYMLINK libspdk_ioat.so 00:04:58.083 LIB libspdk_vfio_user.a 00:04:58.083 SO libspdk_vfio_user.so.5.0 00:04:58.083 LIB libspdk_util.a 00:04:58.083 SYMLINK libspdk_vfio_user.so 00:04:58.083 SO libspdk_util.so.10.1 00:04:58.083 SYMLINK libspdk_util.so 00:04:58.083 LIB libspdk_trace_parser.a 00:04:58.083 SO libspdk_trace_parser.so.6.0 00:04:58.083 SYMLINK libspdk_trace_parser.so 00:04:58.083 CC lib/rdma_utils/rdma_utils.o 00:04:58.083 CC lib/json/json_parse.o 00:04:58.083 CC 
lib/json/json_util.o 00:04:58.083 CC lib/vmd/vmd.o 00:04:58.083 CC lib/json/json_write.o 00:04:58.083 CC lib/env_dpdk/env.o 00:04:58.083 CC lib/vmd/led.o 00:04:58.083 CC lib/env_dpdk/memory.o 00:04:58.083 CC lib/idxd/idxd.o 00:04:58.083 CC lib/env_dpdk/pci.o 00:04:58.083 CC lib/conf/conf.o 00:04:58.083 CC lib/env_dpdk/init.o 00:04:58.083 CC lib/idxd/idxd_user.o 00:04:58.083 CC lib/env_dpdk/threads.o 00:04:58.083 CC lib/env_dpdk/pci_ioat.o 00:04:58.083 CC lib/idxd/idxd_kernel.o 00:04:58.083 CC lib/env_dpdk/pci_virtio.o 00:04:58.083 CC lib/env_dpdk/pci_vmd.o 00:04:58.083 CC lib/env_dpdk/pci_idxd.o 00:04:58.083 CC lib/env_dpdk/pci_event.o 00:04:58.083 CC lib/env_dpdk/sigbus_handler.o 00:04:58.083 CC lib/env_dpdk/pci_dpdk.o 00:04:58.083 CC lib/env_dpdk/pci_dpdk_2207.o 00:04:58.083 CC lib/env_dpdk/pci_dpdk_2211.o 00:04:58.083 LIB libspdk_rdma_utils.a 00:04:58.083 LIB libspdk_conf.a 00:04:58.083 SO libspdk_rdma_utils.so.1.0 00:04:58.083 SO libspdk_conf.so.6.0 00:04:58.083 LIB libspdk_json.a 00:04:58.083 SO libspdk_json.so.6.0 00:04:58.083 SYMLINK libspdk_rdma_utils.so 00:04:58.083 SYMLINK libspdk_conf.so 00:04:58.083 SYMLINK libspdk_json.so 00:04:58.083 LIB libspdk_idxd.a 00:04:58.083 LIB libspdk_vmd.a 00:04:58.083 SO libspdk_idxd.so.12.1 00:04:58.083 SO libspdk_vmd.so.6.0 00:04:58.083 SYMLINK libspdk_idxd.so 00:04:58.083 CC lib/rdma_provider/common.o 00:04:58.083 CC lib/rdma_provider/rdma_provider_verbs.o 00:04:58.083 SYMLINK libspdk_vmd.so 00:04:58.342 CC lib/jsonrpc/jsonrpc_server.o 00:04:58.342 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:04:58.342 CC lib/jsonrpc/jsonrpc_client.o 00:04:58.342 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:04:58.342 LIB libspdk_rdma_provider.a 00:04:58.342 SO libspdk_rdma_provider.so.7.0 00:04:58.342 LIB libspdk_jsonrpc.a 00:04:58.601 SYMLINK libspdk_rdma_provider.so 00:04:58.601 SO libspdk_jsonrpc.so.6.0 00:04:58.601 SYMLINK libspdk_jsonrpc.so 00:04:58.601 LIB libspdk_env_dpdk.a 00:04:58.601 SO libspdk_env_dpdk.so.15.1 00:04:58.860 SYMLINK 
libspdk_env_dpdk.so 00:04:58.860 CC lib/rpc/rpc.o 00:04:59.119 LIB libspdk_rpc.a 00:04:59.119 SO libspdk_rpc.so.6.0 00:04:59.119 SYMLINK libspdk_rpc.so 00:04:59.379 CC lib/trace/trace.o 00:04:59.379 CC lib/trace/trace_flags.o 00:04:59.379 CC lib/trace/trace_rpc.o 00:04:59.379 CC lib/notify/notify.o 00:04:59.379 CC lib/notify/notify_rpc.o 00:04:59.379 CC lib/keyring/keyring.o 00:04:59.379 CC lib/keyring/keyring_rpc.o 00:04:59.638 LIB libspdk_notify.a 00:04:59.638 SO libspdk_notify.so.6.0 00:04:59.638 LIB libspdk_trace.a 00:04:59.638 LIB libspdk_keyring.a 00:04:59.638 SO libspdk_trace.so.11.0 00:04:59.638 SO libspdk_keyring.so.2.0 00:04:59.638 SYMLINK libspdk_notify.so 00:04:59.638 SYMLINK libspdk_trace.so 00:04:59.638 SYMLINK libspdk_keyring.so 00:05:00.206 CC lib/thread/thread.o 00:05:00.206 CC lib/thread/iobuf.o 00:05:00.206 CC lib/sock/sock.o 00:05:00.206 CC lib/sock/sock_rpc.o 00:05:00.465 LIB libspdk_sock.a 00:05:00.465 SO libspdk_sock.so.10.0 00:05:00.465 SYMLINK libspdk_sock.so 00:05:00.724 CC lib/nvme/nvme_ctrlr_cmd.o 00:05:00.724 CC lib/nvme/nvme_ctrlr.o 00:05:00.724 CC lib/nvme/nvme_fabric.o 00:05:00.724 CC lib/nvme/nvme_ns_cmd.o 00:05:00.724 CC lib/nvme/nvme_ns.o 00:05:00.724 CC lib/nvme/nvme_pcie_common.o 00:05:00.724 CC lib/nvme/nvme_pcie.o 00:05:00.724 CC lib/nvme/nvme_qpair.o 00:05:00.724 CC lib/nvme/nvme.o 00:05:00.724 CC lib/nvme/nvme_quirks.o 00:05:00.724 CC lib/nvme/nvme_transport.o 00:05:00.724 CC lib/nvme/nvme_discovery.o 00:05:00.724 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:05:00.724 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:05:00.724 CC lib/nvme/nvme_tcp.o 00:05:00.724 CC lib/nvme/nvme_opal.o 00:05:00.724 CC lib/nvme/nvme_io_msg.o 00:05:00.724 CC lib/nvme/nvme_poll_group.o 00:05:00.724 CC lib/nvme/nvme_zns.o 00:05:00.724 CC lib/nvme/nvme_stubs.o 00:05:00.724 CC lib/nvme/nvme_auth.o 00:05:00.725 CC lib/nvme/nvme_cuse.o 00:05:00.725 CC lib/nvme/nvme_vfio_user.o 00:05:00.725 CC lib/nvme/nvme_rdma.o 00:05:01.293 LIB libspdk_thread.a 00:05:01.293 SO 
libspdk_thread.so.11.0 00:05:01.293 SYMLINK libspdk_thread.so 00:05:01.552 CC lib/accel/accel.o 00:05:01.552 CC lib/accel/accel_rpc.o 00:05:01.552 CC lib/accel/accel_sw.o 00:05:01.552 CC lib/virtio/virtio.o 00:05:01.552 CC lib/virtio/virtio_vhost_user.o 00:05:01.552 CC lib/virtio/virtio_vfio_user.o 00:05:01.552 CC lib/virtio/virtio_pci.o 00:05:01.552 CC lib/blob/blobstore.o 00:05:01.552 CC lib/blob/request.o 00:05:01.552 CC lib/blob/zeroes.o 00:05:01.552 CC lib/blob/blob_bs_dev.o 00:05:01.552 CC lib/vfu_tgt/tgt_endpoint.o 00:05:01.552 CC lib/vfu_tgt/tgt_rpc.o 00:05:01.552 CC lib/fsdev/fsdev.o 00:05:01.552 CC lib/init/json_config.o 00:05:01.552 CC lib/init/subsystem.o 00:05:01.552 CC lib/fsdev/fsdev_io.o 00:05:01.552 CC lib/fsdev/fsdev_rpc.o 00:05:01.552 CC lib/init/subsystem_rpc.o 00:05:01.552 CC lib/init/rpc.o 00:05:01.811 LIB libspdk_init.a 00:05:01.811 SO libspdk_init.so.6.0 00:05:01.811 LIB libspdk_vfu_tgt.a 00:05:01.811 LIB libspdk_virtio.a 00:05:01.811 SO libspdk_vfu_tgt.so.3.0 00:05:01.811 SO libspdk_virtio.so.7.0 00:05:01.811 SYMLINK libspdk_init.so 00:05:01.811 SYMLINK libspdk_vfu_tgt.so 00:05:01.811 SYMLINK libspdk_virtio.so 00:05:02.070 LIB libspdk_fsdev.a 00:05:02.070 SO libspdk_fsdev.so.2.0 00:05:02.070 CC lib/event/app.o 00:05:02.070 CC lib/event/reactor.o 00:05:02.070 CC lib/event/app_rpc.o 00:05:02.070 CC lib/event/log_rpc.o 00:05:02.070 CC lib/event/scheduler_static.o 00:05:02.070 SYMLINK libspdk_fsdev.so 00:05:02.329 LIB libspdk_accel.a 00:05:02.329 SO libspdk_accel.so.16.0 00:05:02.329 SYMLINK libspdk_accel.so 00:05:02.587 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:05:02.588 LIB libspdk_nvme.a 00:05:02.588 LIB libspdk_event.a 00:05:02.588 SO libspdk_event.so.14.0 00:05:02.588 SO libspdk_nvme.so.15.0 00:05:02.588 SYMLINK libspdk_event.so 00:05:02.846 CC lib/bdev/bdev.o 00:05:02.846 CC lib/bdev/bdev_rpc.o 00:05:02.846 CC lib/bdev/bdev_zone.o 00:05:02.846 CC lib/bdev/part.o 00:05:02.846 CC lib/bdev/scsi_nvme.o 00:05:02.846 SYMLINK libspdk_nvme.so 
00:05:02.846 LIB libspdk_fuse_dispatcher.a 00:05:03.105 SO libspdk_fuse_dispatcher.so.1.0 00:05:03.105 SYMLINK libspdk_fuse_dispatcher.so 00:05:03.673 LIB libspdk_blob.a 00:05:03.673 SO libspdk_blob.so.11.0 00:05:03.931 SYMLINK libspdk_blob.so 00:05:04.189 CC lib/blobfs/blobfs.o 00:05:04.189 CC lib/blobfs/tree.o 00:05:04.189 CC lib/lvol/lvol.o 00:05:04.448 LIB libspdk_bdev.a 00:05:04.705 SO libspdk_bdev.so.17.0 00:05:04.706 SYMLINK libspdk_bdev.so 00:05:04.706 LIB libspdk_blobfs.a 00:05:04.706 SO libspdk_blobfs.so.10.0 00:05:04.706 LIB libspdk_lvol.a 00:05:04.706 SYMLINK libspdk_blobfs.so 00:05:04.706 SO libspdk_lvol.so.10.0 00:05:04.963 SYMLINK libspdk_lvol.so 00:05:04.963 CC lib/nbd/nbd.o 00:05:04.963 CC lib/nbd/nbd_rpc.o 00:05:04.963 CC lib/scsi/dev.o 00:05:04.963 CC lib/scsi/lun.o 00:05:04.963 CC lib/scsi/port.o 00:05:04.963 CC lib/scsi/scsi.o 00:05:04.963 CC lib/nvmf/ctrlr.o 00:05:04.963 CC lib/scsi/scsi_bdev.o 00:05:04.963 CC lib/nvmf/ctrlr_discovery.o 00:05:04.963 CC lib/scsi/scsi_pr.o 00:05:04.963 CC lib/scsi/scsi_rpc.o 00:05:04.963 CC lib/ftl/ftl_core.o 00:05:04.963 CC lib/nvmf/ctrlr_bdev.o 00:05:04.963 CC lib/nvmf/subsystem.o 00:05:04.963 CC lib/ftl/ftl_init.o 00:05:04.963 CC lib/scsi/task.o 00:05:04.963 CC lib/ftl/ftl_layout.o 00:05:04.963 CC lib/nvmf/nvmf.o 00:05:04.963 CC lib/ftl/ftl_debug.o 00:05:04.963 CC lib/nvmf/nvmf_rpc.o 00:05:04.963 CC lib/ublk/ublk.o 00:05:04.963 CC lib/ftl/ftl_io.o 00:05:04.963 CC lib/nvmf/transport.o 00:05:04.963 CC lib/ublk/ublk_rpc.o 00:05:04.963 CC lib/nvmf/tcp.o 00:05:04.963 CC lib/ftl/ftl_sb.o 00:05:04.963 CC lib/ftl/ftl_l2p.o 00:05:04.963 CC lib/nvmf/stubs.o 00:05:04.963 CC lib/ftl/ftl_l2p_flat.o 00:05:04.963 CC lib/nvmf/mdns_server.o 00:05:04.963 CC lib/nvmf/vfio_user.o 00:05:04.963 CC lib/ftl/ftl_nv_cache.o 00:05:04.963 CC lib/ftl/ftl_band.o 00:05:04.963 CC lib/nvmf/rdma.o 00:05:04.963 CC lib/nvmf/auth.o 00:05:04.963 CC lib/ftl/ftl_band_ops.o 00:05:04.963 CC lib/ftl/ftl_writer.o 00:05:04.963 CC lib/ftl/ftl_rq.o 
00:05:04.963 CC lib/ftl/ftl_reloc.o 00:05:04.963 CC lib/ftl/ftl_l2p_cache.o 00:05:04.963 CC lib/ftl/ftl_p2l.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt.o 00:05:04.963 CC lib/ftl/ftl_p2l_log.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt_startup.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt_md.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt_misc.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt_band.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:05:04.963 CC lib/ftl/utils/ftl_conf.o 00:05:04.963 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:05:04.963 CC lib/ftl/utils/ftl_md.o 00:05:04.963 CC lib/ftl/utils/ftl_mempool.o 00:05:04.963 CC lib/ftl/utils/ftl_bitmap.o 00:05:04.963 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:05:04.963 CC lib/ftl/utils/ftl_property.o 00:05:04.963 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:05:04.963 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:05:04.963 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:05:04.963 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:05:04.963 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:05:04.963 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:05:04.963 CC lib/ftl/nvc/ftl_nvc_dev.o 00:05:04.963 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:05:04.963 CC lib/ftl/upgrade/ftl_sb_v3.o 00:05:04.963 CC lib/ftl/upgrade/ftl_sb_v5.o 00:05:04.963 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:05:04.963 CC lib/ftl/base/ftl_base_bdev.o 00:05:04.963 CC lib/ftl/ftl_trace.o 00:05:04.963 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:05:04.963 CC lib/ftl/base/ftl_base_dev.o 00:05:05.527 LIB libspdk_nbd.a 00:05:05.527 LIB libspdk_scsi.a 00:05:05.786 SO libspdk_nbd.so.7.0 00:05:05.786 LIB libspdk_ublk.a 00:05:05.786 SO libspdk_scsi.so.9.0 00:05:05.786 SO libspdk_ublk.so.3.0 00:05:05.786 SYMLINK libspdk_nbd.so 00:05:05.786 SYMLINK libspdk_scsi.so 
00:05:05.786 SYMLINK libspdk_ublk.so 00:05:06.044 LIB libspdk_ftl.a 00:05:06.044 CC lib/iscsi/conn.o 00:05:06.044 CC lib/iscsi/init_grp.o 00:05:06.044 CC lib/iscsi/iscsi.o 00:05:06.044 CC lib/iscsi/param.o 00:05:06.044 CC lib/iscsi/portal_grp.o 00:05:06.044 CC lib/iscsi/tgt_node.o 00:05:06.044 CC lib/iscsi/iscsi_subsystem.o 00:05:06.044 CC lib/iscsi/iscsi_rpc.o 00:05:06.044 CC lib/vhost/vhost.o 00:05:06.044 CC lib/iscsi/task.o 00:05:06.044 CC lib/vhost/vhost_rpc.o 00:05:06.044 CC lib/vhost/vhost_scsi.o 00:05:06.044 CC lib/vhost/vhost_blk.o 00:05:06.044 CC lib/vhost/rte_vhost_user.o 00:05:06.302 SO libspdk_ftl.so.9.0 00:05:06.302 SYMLINK libspdk_ftl.so 00:05:06.869 LIB libspdk_nvmf.a 00:05:06.869 LIB libspdk_vhost.a 00:05:06.869 SO libspdk_nvmf.so.20.0 00:05:06.869 SO libspdk_vhost.so.8.0 00:05:07.128 SYMLINK libspdk_vhost.so 00:05:07.128 SYMLINK libspdk_nvmf.so 00:05:07.128 LIB libspdk_iscsi.a 00:05:07.128 SO libspdk_iscsi.so.8.0 00:05:07.128 SYMLINK libspdk_iscsi.so 00:05:07.697 CC module/env_dpdk/env_dpdk_rpc.o 00:05:07.697 CC module/vfu_device/vfu_virtio.o 00:05:07.697 CC module/vfu_device/vfu_virtio_blk.o 00:05:07.697 CC module/vfu_device/vfu_virtio_scsi.o 00:05:07.697 CC module/vfu_device/vfu_virtio_rpc.o 00:05:07.697 CC module/vfu_device/vfu_virtio_fs.o 00:05:07.955 CC module/scheduler/gscheduler/gscheduler.o 00:05:07.955 CC module/keyring/file/keyring.o 00:05:07.955 CC module/keyring/file/keyring_rpc.o 00:05:07.955 CC module/accel/iaa/accel_iaa.o 00:05:07.955 CC module/accel/dsa/accel_dsa_rpc.o 00:05:07.955 CC module/accel/dsa/accel_dsa.o 00:05:07.955 CC module/accel/iaa/accel_iaa_rpc.o 00:05:07.955 LIB libspdk_env_dpdk_rpc.a 00:05:07.955 CC module/accel/error/accel_error.o 00:05:07.955 CC module/accel/ioat/accel_ioat.o 00:05:07.955 CC module/scheduler/dynamic/scheduler_dynamic.o 00:05:07.955 CC module/accel/error/accel_error_rpc.o 00:05:07.955 CC module/accel/ioat/accel_ioat_rpc.o 00:05:07.955 CC module/sock/posix/posix.o 00:05:07.955 CC 
module/scheduler/dpdk_governor/dpdk_governor.o 00:05:07.955 CC module/fsdev/aio/fsdev_aio.o 00:05:07.955 CC module/fsdev/aio/fsdev_aio_rpc.o 00:05:07.955 CC module/keyring/linux/keyring_rpc.o 00:05:07.955 CC module/keyring/linux/keyring.o 00:05:07.955 CC module/fsdev/aio/linux_aio_mgr.o 00:05:07.955 CC module/blob/bdev/blob_bdev.o 00:05:07.955 SO libspdk_env_dpdk_rpc.so.6.0 00:05:07.955 SYMLINK libspdk_env_dpdk_rpc.so 00:05:07.955 LIB libspdk_scheduler_gscheduler.a 00:05:07.955 LIB libspdk_keyring_file.a 00:05:07.955 LIB libspdk_scheduler_dpdk_governor.a 00:05:07.955 LIB libspdk_keyring_linux.a 00:05:08.213 SO libspdk_scheduler_gscheduler.so.4.0 00:05:08.213 SO libspdk_keyring_file.so.2.0 00:05:08.213 SO libspdk_scheduler_dpdk_governor.so.4.0 00:05:08.213 SO libspdk_keyring_linux.so.1.0 00:05:08.213 LIB libspdk_accel_ioat.a 00:05:08.213 LIB libspdk_scheduler_dynamic.a 00:05:08.213 LIB libspdk_accel_iaa.a 00:05:08.213 LIB libspdk_accel_error.a 00:05:08.213 SO libspdk_accel_ioat.so.6.0 00:05:08.213 SO libspdk_accel_iaa.so.3.0 00:05:08.213 SYMLINK libspdk_scheduler_dpdk_governor.so 00:05:08.213 SO libspdk_scheduler_dynamic.so.4.0 00:05:08.213 SYMLINK libspdk_scheduler_gscheduler.so 00:05:08.213 SO libspdk_accel_error.so.2.0 00:05:08.213 SYMLINK libspdk_keyring_file.so 00:05:08.213 SYMLINK libspdk_keyring_linux.so 00:05:08.213 LIB libspdk_accel_dsa.a 00:05:08.213 LIB libspdk_blob_bdev.a 00:05:08.213 SYMLINK libspdk_accel_ioat.so 00:05:08.213 SYMLINK libspdk_scheduler_dynamic.so 00:05:08.213 SYMLINK libspdk_accel_iaa.so 00:05:08.213 SO libspdk_accel_dsa.so.5.0 00:05:08.213 SYMLINK libspdk_accel_error.so 00:05:08.213 SO libspdk_blob_bdev.so.11.0 00:05:08.213 SYMLINK libspdk_accel_dsa.so 00:05:08.213 SYMLINK libspdk_blob_bdev.so 00:05:08.213 LIB libspdk_vfu_device.a 00:05:08.214 SO libspdk_vfu_device.so.3.0 00:05:08.471 SYMLINK libspdk_vfu_device.so 00:05:08.471 LIB libspdk_fsdev_aio.a 00:05:08.471 SO libspdk_fsdev_aio.so.1.0 00:05:08.471 LIB libspdk_sock_posix.a 
00:05:08.472 SO libspdk_sock_posix.so.6.0 00:05:08.472 SYMLINK libspdk_fsdev_aio.so 00:05:08.730 SYMLINK libspdk_sock_posix.so 00:05:08.730 CC module/bdev/delay/vbdev_delay_rpc.o 00:05:08.730 CC module/bdev/delay/vbdev_delay.o 00:05:08.730 CC module/blobfs/bdev/blobfs_bdev.o 00:05:08.730 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:05:08.730 CC module/bdev/malloc/bdev_malloc.o 00:05:08.730 CC module/bdev/split/vbdev_split.o 00:05:08.730 CC module/bdev/malloc/bdev_malloc_rpc.o 00:05:08.730 CC module/bdev/null/bdev_null.o 00:05:08.730 CC module/bdev/split/vbdev_split_rpc.o 00:05:08.731 CC module/bdev/null/bdev_null_rpc.o 00:05:08.731 CC module/bdev/ftl/bdev_ftl.o 00:05:08.731 CC module/bdev/ftl/bdev_ftl_rpc.o 00:05:08.731 CC module/bdev/passthru/vbdev_passthru.o 00:05:08.731 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:05:08.731 CC module/bdev/gpt/gpt.o 00:05:08.731 CC module/bdev/zone_block/vbdev_zone_block.o 00:05:08.731 CC module/bdev/gpt/vbdev_gpt.o 00:05:08.731 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:05:08.731 CC module/bdev/nvme/bdev_nvme.o 00:05:08.731 CC module/bdev/nvme/bdev_nvme_rpc.o 00:05:08.731 CC module/bdev/error/vbdev_error_rpc.o 00:05:08.731 CC module/bdev/iscsi/bdev_iscsi.o 00:05:08.731 CC module/bdev/error/vbdev_error.o 00:05:08.731 CC module/bdev/nvme/nvme_rpc.o 00:05:08.731 CC module/bdev/nvme/bdev_mdns_client.o 00:05:08.731 CC module/bdev/raid/bdev_raid_sb.o 00:05:08.731 CC module/bdev/nvme/vbdev_opal_rpc.o 00:05:08.731 CC module/bdev/raid/bdev_raid.o 00:05:08.731 CC module/bdev/nvme/vbdev_opal.o 00:05:08.731 CC module/bdev/raid/bdev_raid_rpc.o 00:05:08.731 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:05:08.731 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:05:08.731 CC module/bdev/aio/bdev_aio.o 00:05:08.731 CC module/bdev/raid/raid0.o 00:05:08.731 CC module/bdev/raid/raid1.o 00:05:08.731 CC module/bdev/aio/bdev_aio_rpc.o 00:05:08.731 CC module/bdev/raid/concat.o 00:05:08.731 CC module/bdev/lvol/vbdev_lvol.o 00:05:08.731 CC 
module/bdev/lvol/vbdev_lvol_rpc.o 00:05:08.731 CC module/bdev/virtio/bdev_virtio_scsi.o 00:05:08.731 CC module/bdev/virtio/bdev_virtio_rpc.o 00:05:08.731 CC module/bdev/virtio/bdev_virtio_blk.o 00:05:08.988 LIB libspdk_blobfs_bdev.a 00:05:08.988 SO libspdk_blobfs_bdev.so.6.0 00:05:08.988 LIB libspdk_bdev_split.a 00:05:08.988 SYMLINK libspdk_blobfs_bdev.so 00:05:08.988 SO libspdk_bdev_split.so.6.0 00:05:08.988 LIB libspdk_bdev_null.a 00:05:08.988 SO libspdk_bdev_null.so.6.0 00:05:09.246 LIB libspdk_bdev_gpt.a 00:05:09.246 LIB libspdk_bdev_ftl.a 00:05:09.246 LIB libspdk_bdev_error.a 00:05:09.246 SYMLINK libspdk_bdev_split.so 00:05:09.246 SO libspdk_bdev_gpt.so.6.0 00:05:09.246 SO libspdk_bdev_ftl.so.6.0 00:05:09.246 LIB libspdk_bdev_malloc.a 00:05:09.246 LIB libspdk_bdev_passthru.a 00:05:09.246 SO libspdk_bdev_error.so.6.0 00:05:09.246 SYMLINK libspdk_bdev_null.so 00:05:09.246 LIB libspdk_bdev_aio.a 00:05:09.246 LIB libspdk_bdev_delay.a 00:05:09.246 LIB libspdk_bdev_zone_block.a 00:05:09.246 SO libspdk_bdev_passthru.so.6.0 00:05:09.246 SO libspdk_bdev_malloc.so.6.0 00:05:09.246 LIB libspdk_bdev_iscsi.a 00:05:09.246 SO libspdk_bdev_aio.so.6.0 00:05:09.246 SYMLINK libspdk_bdev_gpt.so 00:05:09.246 SO libspdk_bdev_delay.so.6.0 00:05:09.246 SYMLINK libspdk_bdev_ftl.so 00:05:09.246 SYMLINK libspdk_bdev_error.so 00:05:09.246 SO libspdk_bdev_zone_block.so.6.0 00:05:09.246 SO libspdk_bdev_iscsi.so.6.0 00:05:09.246 SYMLINK libspdk_bdev_malloc.so 00:05:09.246 SYMLINK libspdk_bdev_passthru.so 00:05:09.246 SYMLINK libspdk_bdev_aio.so 00:05:09.246 SYMLINK libspdk_bdev_delay.so 00:05:09.246 SYMLINK libspdk_bdev_zone_block.so 00:05:09.246 LIB libspdk_bdev_virtio.a 00:05:09.246 SYMLINK libspdk_bdev_iscsi.so 00:05:09.246 LIB libspdk_bdev_lvol.a 00:05:09.246 SO libspdk_bdev_virtio.so.6.0 00:05:09.246 SO libspdk_bdev_lvol.so.6.0 00:05:09.504 SYMLINK libspdk_bdev_virtio.so 00:05:09.504 SYMLINK libspdk_bdev_lvol.so 00:05:09.763 LIB libspdk_bdev_raid.a 00:05:09.763 SO 
libspdk_bdev_raid.so.6.0 00:05:09.763 SYMLINK libspdk_bdev_raid.so 00:05:10.699 LIB libspdk_bdev_nvme.a 00:05:10.699 SO libspdk_bdev_nvme.so.7.1 00:05:10.699 SYMLINK libspdk_bdev_nvme.so 00:05:11.268 CC module/event/subsystems/keyring/keyring.o 00:05:11.268 CC module/event/subsystems/iobuf/iobuf.o 00:05:11.268 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:05:11.268 CC module/event/subsystems/vmd/vmd.o 00:05:11.268 CC module/event/subsystems/vmd/vmd_rpc.o 00:05:11.268 CC module/event/subsystems/fsdev/fsdev.o 00:05:11.526 CC module/event/subsystems/scheduler/scheduler.o 00:05:11.526 CC module/event/subsystems/sock/sock.o 00:05:11.526 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:05:11.526 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:05:11.526 LIB libspdk_event_keyring.a 00:05:11.526 LIB libspdk_event_sock.a 00:05:11.526 LIB libspdk_event_fsdev.a 00:05:11.526 SO libspdk_event_keyring.so.1.0 00:05:11.526 LIB libspdk_event_vhost_blk.a 00:05:11.526 LIB libspdk_event_iobuf.a 00:05:11.526 LIB libspdk_event_vmd.a 00:05:11.526 LIB libspdk_event_scheduler.a 00:05:11.526 LIB libspdk_event_vfu_tgt.a 00:05:11.526 SO libspdk_event_sock.so.5.0 00:05:11.526 SO libspdk_event_vhost_blk.so.3.0 00:05:11.526 SO libspdk_event_fsdev.so.1.0 00:05:11.526 SO libspdk_event_iobuf.so.3.0 00:05:11.526 SO libspdk_event_scheduler.so.4.0 00:05:11.526 SO libspdk_event_vfu_tgt.so.3.0 00:05:11.526 SO libspdk_event_vmd.so.6.0 00:05:11.526 SYMLINK libspdk_event_keyring.so 00:05:11.526 SYMLINK libspdk_event_sock.so 00:05:11.526 SYMLINK libspdk_event_vhost_blk.so 00:05:11.526 SYMLINK libspdk_event_fsdev.so 00:05:11.526 SYMLINK libspdk_event_iobuf.so 00:05:11.785 SYMLINK libspdk_event_vfu_tgt.so 00:05:11.785 SYMLINK libspdk_event_scheduler.so 00:05:11.785 SYMLINK libspdk_event_vmd.so 00:05:12.044 CC module/event/subsystems/accel/accel.o 00:05:12.044 LIB libspdk_event_accel.a 00:05:12.044 SO libspdk_event_accel.so.6.0 00:05:12.303 SYMLINK libspdk_event_accel.so 00:05:12.562 CC 
module/event/subsystems/bdev/bdev.o 00:05:12.562 LIB libspdk_event_bdev.a 00:05:12.821 SO libspdk_event_bdev.so.6.0 00:05:12.821 SYMLINK libspdk_event_bdev.so 00:05:13.081 CC module/event/subsystems/scsi/scsi.o 00:05:13.081 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:05:13.081 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:05:13.081 CC module/event/subsystems/ublk/ublk.o 00:05:13.081 CC module/event/subsystems/nbd/nbd.o 00:05:13.343 LIB libspdk_event_scsi.a 00:05:13.343 LIB libspdk_event_ublk.a 00:05:13.343 LIB libspdk_event_nbd.a 00:05:13.343 SO libspdk_event_scsi.so.6.0 00:05:13.343 SO libspdk_event_ublk.so.3.0 00:05:13.343 SO libspdk_event_nbd.so.6.0 00:05:13.343 LIB libspdk_event_nvmf.a 00:05:13.343 SYMLINK libspdk_event_scsi.so 00:05:13.343 SO libspdk_event_nvmf.so.6.0 00:05:13.343 SYMLINK libspdk_event_ublk.so 00:05:13.343 SYMLINK libspdk_event_nbd.so 00:05:13.343 SYMLINK libspdk_event_nvmf.so 00:05:13.603 CC module/event/subsystems/iscsi/iscsi.o 00:05:13.603 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:05:13.862 LIB libspdk_event_vhost_scsi.a 00:05:13.862 LIB libspdk_event_iscsi.a 00:05:13.862 SO libspdk_event_vhost_scsi.so.3.0 00:05:13.862 SO libspdk_event_iscsi.so.6.0 00:05:13.862 SYMLINK libspdk_event_vhost_scsi.so 00:05:13.862 SYMLINK libspdk_event_iscsi.so 00:05:14.121 SO libspdk.so.6.0 00:05:14.121 SYMLINK libspdk.so 00:05:14.379 CC app/trace_record/trace_record.o 00:05:14.379 CC app/spdk_top/spdk_top.o 00:05:14.379 CXX app/trace/trace.o 00:05:14.379 CC app/spdk_nvme_perf/perf.o 00:05:14.379 CC app/spdk_nvme_identify/identify.o 00:05:14.379 CC test/rpc_client/rpc_client_test.o 00:05:14.379 TEST_HEADER include/spdk/accel_module.h 00:05:14.379 TEST_HEADER include/spdk/accel.h 00:05:14.379 TEST_HEADER include/spdk/assert.h 00:05:14.379 TEST_HEADER include/spdk/barrier.h 00:05:14.379 CC app/spdk_nvme_discover/discovery_aer.o 00:05:14.379 TEST_HEADER include/spdk/bdev.h 00:05:14.379 TEST_HEADER include/spdk/base64.h 00:05:14.379 TEST_HEADER 
include/spdk/bit_array.h 00:05:14.379 TEST_HEADER include/spdk/bdev_module.h 00:05:14.379 TEST_HEADER include/spdk/bdev_zone.h 00:05:14.379 TEST_HEADER include/spdk/blob_bdev.h 00:05:14.379 TEST_HEADER include/spdk/bit_pool.h 00:05:14.379 TEST_HEADER include/spdk/blobfs_bdev.h 00:05:14.379 TEST_HEADER include/spdk/blobfs.h 00:05:14.379 TEST_HEADER include/spdk/blob.h 00:05:14.379 TEST_HEADER include/spdk/conf.h 00:05:14.379 CC app/spdk_lspci/spdk_lspci.o 00:05:14.379 TEST_HEADER include/spdk/config.h 00:05:14.379 TEST_HEADER include/spdk/crc16.h 00:05:14.379 TEST_HEADER include/spdk/cpuset.h 00:05:14.379 TEST_HEADER include/spdk/crc64.h 00:05:14.379 TEST_HEADER include/spdk/crc32.h 00:05:14.379 TEST_HEADER include/spdk/dif.h 00:05:14.379 TEST_HEADER include/spdk/endian.h 00:05:14.379 TEST_HEADER include/spdk/dma.h 00:05:14.379 TEST_HEADER include/spdk/env.h 00:05:14.379 TEST_HEADER include/spdk/env_dpdk.h 00:05:14.379 TEST_HEADER include/spdk/event.h 00:05:14.379 TEST_HEADER include/spdk/fd_group.h 00:05:14.379 TEST_HEADER include/spdk/fd.h 00:05:14.379 TEST_HEADER include/spdk/file.h 00:05:14.379 TEST_HEADER include/spdk/fsdev.h 00:05:14.379 TEST_HEADER include/spdk/fsdev_module.h 00:05:14.379 TEST_HEADER include/spdk/ftl.h 00:05:14.379 TEST_HEADER include/spdk/fuse_dispatcher.h 00:05:14.379 TEST_HEADER include/spdk/gpt_spec.h 00:05:14.379 TEST_HEADER include/spdk/hexlify.h 00:05:14.379 CC examples/interrupt_tgt/interrupt_tgt.o 00:05:14.379 TEST_HEADER include/spdk/histogram_data.h 00:05:14.379 TEST_HEADER include/spdk/idxd.h 00:05:14.379 TEST_HEADER include/spdk/idxd_spec.h 00:05:14.379 TEST_HEADER include/spdk/init.h 00:05:14.379 TEST_HEADER include/spdk/ioat.h 00:05:14.379 TEST_HEADER include/spdk/iscsi_spec.h 00:05:14.379 TEST_HEADER include/spdk/ioat_spec.h 00:05:14.379 TEST_HEADER include/spdk/json.h 00:05:14.379 CC app/nvmf_tgt/nvmf_main.o 00:05:14.379 TEST_HEADER include/spdk/jsonrpc.h 00:05:14.379 TEST_HEADER include/spdk/keyring.h 00:05:14.379 CC 
app/iscsi_tgt/iscsi_tgt.o 00:05:14.379 TEST_HEADER include/spdk/keyring_module.h 00:05:14.379 TEST_HEADER include/spdk/likely.h 00:05:14.379 CC app/spdk_dd/spdk_dd.o 00:05:14.379 TEST_HEADER include/spdk/lvol.h 00:05:14.379 TEST_HEADER include/spdk/log.h 00:05:14.379 TEST_HEADER include/spdk/md5.h 00:05:14.379 TEST_HEADER include/spdk/memory.h 00:05:14.379 TEST_HEADER include/spdk/mmio.h 00:05:14.379 TEST_HEADER include/spdk/nbd.h 00:05:14.379 TEST_HEADER include/spdk/net.h 00:05:14.379 TEST_HEADER include/spdk/nvme.h 00:05:14.379 TEST_HEADER include/spdk/notify.h 00:05:14.379 TEST_HEADER include/spdk/nvme_ocssd.h 00:05:14.379 TEST_HEADER include/spdk/nvme_intel.h 00:05:14.379 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:05:14.379 TEST_HEADER include/spdk/nvme_zns.h 00:05:14.379 TEST_HEADER include/spdk/nvme_spec.h 00:05:14.379 TEST_HEADER include/spdk/nvmf_cmd.h 00:05:14.379 TEST_HEADER include/spdk/nvmf.h 00:05:14.379 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:05:14.379 TEST_HEADER include/spdk/nvmf_transport.h 00:05:14.379 TEST_HEADER include/spdk/opal.h 00:05:14.379 TEST_HEADER include/spdk/nvmf_spec.h 00:05:14.379 TEST_HEADER include/spdk/opal_spec.h 00:05:14.379 TEST_HEADER include/spdk/pci_ids.h 00:05:14.379 TEST_HEADER include/spdk/pipe.h 00:05:14.379 TEST_HEADER include/spdk/queue.h 00:05:14.380 TEST_HEADER include/spdk/rpc.h 00:05:14.380 TEST_HEADER include/spdk/reduce.h 00:05:14.380 TEST_HEADER include/spdk/scheduler.h 00:05:14.380 TEST_HEADER include/spdk/scsi.h 00:05:14.380 TEST_HEADER include/spdk/sock.h 00:05:14.380 TEST_HEADER include/spdk/string.h 00:05:14.380 TEST_HEADER include/spdk/scsi_spec.h 00:05:14.380 TEST_HEADER include/spdk/stdinc.h 00:05:14.380 TEST_HEADER include/spdk/thread.h 00:05:14.380 TEST_HEADER include/spdk/trace_parser.h 00:05:14.380 TEST_HEADER include/spdk/tree.h 00:05:14.380 TEST_HEADER include/spdk/trace.h 00:05:14.380 TEST_HEADER include/spdk/util.h 00:05:14.380 TEST_HEADER include/spdk/uuid.h 00:05:14.380 TEST_HEADER 
include/spdk/ublk.h 00:05:14.380 TEST_HEADER include/spdk/vfio_user_pci.h 00:05:14.380 TEST_HEADER include/spdk/vfio_user_spec.h 00:05:14.380 TEST_HEADER include/spdk/vhost.h 00:05:14.380 TEST_HEADER include/spdk/version.h 00:05:14.380 TEST_HEADER include/spdk/xor.h 00:05:14.380 TEST_HEADER include/spdk/vmd.h 00:05:14.380 TEST_HEADER include/spdk/zipf.h 00:05:14.380 CXX test/cpp_headers/accel.o 00:05:14.380 CXX test/cpp_headers/accel_module.o 00:05:14.380 CXX test/cpp_headers/assert.o 00:05:14.380 CXX test/cpp_headers/barrier.o 00:05:14.380 CXX test/cpp_headers/base64.o 00:05:14.380 CXX test/cpp_headers/bdev.o 00:05:14.380 CXX test/cpp_headers/bdev_zone.o 00:05:14.380 CXX test/cpp_headers/bit_array.o 00:05:14.380 CXX test/cpp_headers/bdev_module.o 00:05:14.380 CXX test/cpp_headers/blob_bdev.o 00:05:14.380 CXX test/cpp_headers/bit_pool.o 00:05:14.380 CXX test/cpp_headers/blob.o 00:05:14.380 CXX test/cpp_headers/blobfs_bdev.o 00:05:14.380 CXX test/cpp_headers/config.o 00:05:14.380 CXX test/cpp_headers/blobfs.o 00:05:14.380 CXX test/cpp_headers/conf.o 00:05:14.380 CXX test/cpp_headers/crc16.o 00:05:14.380 CXX test/cpp_headers/cpuset.o 00:05:14.380 CXX test/cpp_headers/crc64.o 00:05:14.380 CXX test/cpp_headers/dma.o 00:05:14.380 CXX test/cpp_headers/env_dpdk.o 00:05:14.380 CXX test/cpp_headers/crc32.o 00:05:14.380 CXX test/cpp_headers/dif.o 00:05:14.380 CXX test/cpp_headers/env.o 00:05:14.380 CXX test/cpp_headers/event.o 00:05:14.380 CXX test/cpp_headers/endian.o 00:05:14.380 CC app/spdk_tgt/spdk_tgt.o 00:05:14.380 CXX test/cpp_headers/file.o 00:05:14.380 CXX test/cpp_headers/fd_group.o 00:05:14.380 CXX test/cpp_headers/fsdev.o 00:05:14.380 CXX test/cpp_headers/fd.o 00:05:14.380 CXX test/cpp_headers/fsdev_module.o 00:05:14.380 CXX test/cpp_headers/fuse_dispatcher.o 00:05:14.380 CXX test/cpp_headers/ftl.o 00:05:14.380 CXX test/cpp_headers/gpt_spec.o 00:05:14.380 CXX test/cpp_headers/idxd.o 00:05:14.380 CXX test/cpp_headers/histogram_data.o 00:05:14.380 CXX 
test/cpp_headers/hexlify.o 00:05:14.380 CXX test/cpp_headers/idxd_spec.o 00:05:14.380 CXX test/cpp_headers/ioat.o 00:05:14.380 CXX test/cpp_headers/init.o 00:05:14.380 CXX test/cpp_headers/iscsi_spec.o 00:05:14.380 CXX test/cpp_headers/ioat_spec.o 00:05:14.380 CXX test/cpp_headers/json.o 00:05:14.380 CXX test/cpp_headers/jsonrpc.o 00:05:14.380 CXX test/cpp_headers/keyring.o 00:05:14.380 CXX test/cpp_headers/keyring_module.o 00:05:14.653 CXX test/cpp_headers/log.o 00:05:14.653 CXX test/cpp_headers/likely.o 00:05:14.653 CXX test/cpp_headers/lvol.o 00:05:14.653 CXX test/cpp_headers/md5.o 00:05:14.653 CXX test/cpp_headers/memory.o 00:05:14.653 CXX test/cpp_headers/nbd.o 00:05:14.653 CXX test/cpp_headers/mmio.o 00:05:14.653 CXX test/cpp_headers/net.o 00:05:14.653 CXX test/cpp_headers/notify.o 00:05:14.653 CXX test/cpp_headers/nvme.o 00:05:14.653 CXX test/cpp_headers/nvme_ocssd.o 00:05:14.653 CXX test/cpp_headers/nvme_intel.o 00:05:14.653 CXX test/cpp_headers/nvme_ocssd_spec.o 00:05:14.653 CXX test/cpp_headers/nvme_spec.o 00:05:14.653 CXX test/cpp_headers/nvme_zns.o 00:05:14.653 CXX test/cpp_headers/nvmf_cmd.o 00:05:14.653 CXX test/cpp_headers/nvmf_fc_spec.o 00:05:14.653 CXX test/cpp_headers/nvmf.o 00:05:14.653 CXX test/cpp_headers/nvmf_spec.o 00:05:14.653 CXX test/cpp_headers/nvmf_transport.o 00:05:14.653 CXX test/cpp_headers/opal.o 00:05:14.653 CC examples/util/zipf/zipf.o 00:05:14.653 CC test/env/memory/memory_ut.o 00:05:14.653 CXX test/cpp_headers/opal_spec.o 00:05:14.653 CC examples/ioat/perf/perf.o 00:05:14.653 CC test/app/jsoncat/jsoncat.o 00:05:14.653 CC test/env/pci/pci_ut.o 00:05:14.653 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:05:14.653 CC app/fio/nvme/fio_plugin.o 00:05:14.653 CC examples/ioat/verify/verify.o 00:05:14.653 CC test/env/vtophys/vtophys.o 00:05:14.653 CC test/app/histogram_perf/histogram_perf.o 00:05:14.653 CC test/thread/poller_perf/poller_perf.o 00:05:14.653 CC test/app/stub/stub.o 00:05:14.653 CC test/app/bdev_svc/bdev_svc.o 
00:05:14.653 CC app/fio/bdev/fio_plugin.o 00:05:14.653 CC test/dma/test_dma/test_dma.o 00:05:14.929 LINK spdk_lspci 00:05:14.929 LINK spdk_nvme_discover 00:05:14.929 LINK rpc_client_test 00:05:14.929 LINK interrupt_tgt 00:05:14.929 LINK spdk_trace_record 00:05:14.929 LINK iscsi_tgt 00:05:15.191 CC test/env/mem_callbacks/mem_callbacks.o 00:05:15.191 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:05:15.191 LINK nvmf_tgt 00:05:15.191 LINK jsoncat 00:05:15.191 LINK spdk_tgt 00:05:15.191 CXX test/cpp_headers/pci_ids.o 00:05:15.191 CXX test/cpp_headers/pipe.o 00:05:15.191 CXX test/cpp_headers/queue.o 00:05:15.191 CXX test/cpp_headers/reduce.o 00:05:15.191 CXX test/cpp_headers/rpc.o 00:05:15.191 CXX test/cpp_headers/scheduler.o 00:05:15.191 LINK env_dpdk_post_init 00:05:15.191 CXX test/cpp_headers/scsi.o 00:05:15.191 CXX test/cpp_headers/scsi_spec.o 00:05:15.191 CXX test/cpp_headers/sock.o 00:05:15.191 CXX test/cpp_headers/stdinc.o 00:05:15.191 CXX test/cpp_headers/string.o 00:05:15.191 CXX test/cpp_headers/thread.o 00:05:15.191 CXX test/cpp_headers/trace.o 00:05:15.191 CXX test/cpp_headers/trace_parser.o 00:05:15.191 CXX test/cpp_headers/tree.o 00:05:15.191 CXX test/cpp_headers/ublk.o 00:05:15.191 CXX test/cpp_headers/util.o 00:05:15.191 CXX test/cpp_headers/uuid.o 00:05:15.191 CXX test/cpp_headers/version.o 00:05:15.191 CXX test/cpp_headers/vfio_user_pci.o 00:05:15.191 CXX test/cpp_headers/vfio_user_spec.o 00:05:15.191 CXX test/cpp_headers/vhost.o 00:05:15.191 CXX test/cpp_headers/vmd.o 00:05:15.191 CXX test/cpp_headers/xor.o 00:05:15.191 CXX test/cpp_headers/zipf.o 00:05:15.191 LINK bdev_svc 00:05:15.191 LINK zipf 00:05:15.191 LINK histogram_perf 00:05:15.191 LINK vtophys 00:05:15.191 LINK poller_perf 00:05:15.191 LINK stub 00:05:15.191 LINK spdk_dd 00:05:15.191 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:05:15.450 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:05:15.450 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:05:15.450 LINK ioat_perf 00:05:15.450 LINK verify 
00:05:15.450 LINK spdk_trace 00:05:15.450 LINK pci_ut 00:05:15.450 LINK spdk_nvme 00:05:15.708 LINK test_dma 00:05:15.708 LINK nvme_fuzz 00:05:15.708 LINK spdk_bdev 00:05:15.708 CC test/event/event_perf/event_perf.o 00:05:15.708 CC test/event/reactor_perf/reactor_perf.o 00:05:15.708 CC test/event/reactor/reactor.o 00:05:15.708 CC test/event/app_repeat/app_repeat.o 00:05:15.708 LINK spdk_nvme_identify 00:05:15.708 CC examples/idxd/perf/perf.o 00:05:15.708 LINK vhost_fuzz 00:05:15.708 LINK spdk_nvme_perf 00:05:15.708 CC test/event/scheduler/scheduler.o 00:05:15.708 CC examples/vmd/lsvmd/lsvmd.o 00:05:15.708 CC examples/sock/hello_world/hello_sock.o 00:05:15.708 CC examples/vmd/led/led.o 00:05:15.708 LINK mem_callbacks 00:05:15.708 CC examples/thread/thread/thread_ex.o 00:05:15.708 LINK spdk_top 00:05:15.966 LINK event_perf 00:05:15.966 LINK reactor_perf 00:05:15.966 CC app/vhost/vhost.o 00:05:15.966 LINK reactor 00:05:15.966 LINK app_repeat 00:05:15.966 LINK lsvmd 00:05:15.966 LINK led 00:05:15.966 LINK scheduler 00:05:15.966 LINK hello_sock 00:05:15.966 LINK thread 00:05:15.966 LINK idxd_perf 00:05:15.966 LINK vhost 00:05:16.224 CC test/blobfs/mkfs/mkfs.o 00:05:16.224 CC test/nvme/sgl/sgl.o 00:05:16.224 CC test/nvme/compliance/nvme_compliance.o 00:05:16.224 CC test/nvme/aer/aer.o 00:05:16.224 CC test/nvme/cuse/cuse.o 00:05:16.224 CC test/nvme/fdp/fdp.o 00:05:16.224 CC test/nvme/boot_partition/boot_partition.o 00:05:16.224 CC test/nvme/overhead/overhead.o 00:05:16.224 CC test/nvme/reset/reset.o 00:05:16.224 CC test/nvme/startup/startup.o 00:05:16.224 CC test/nvme/simple_copy/simple_copy.o 00:05:16.224 CC test/nvme/connect_stress/connect_stress.o 00:05:16.224 CC test/nvme/err_injection/err_injection.o 00:05:16.224 CC test/nvme/doorbell_aers/doorbell_aers.o 00:05:16.224 CC test/nvme/e2edp/nvme_dp.o 00:05:16.224 CC test/nvme/reserve/reserve.o 00:05:16.224 CC test/nvme/fused_ordering/fused_ordering.o 00:05:16.224 CC test/accel/dif/dif.o 00:05:16.224 LINK memory_ut 
00:05:16.224 CC test/lvol/esnap/esnap.o 00:05:16.224 LINK boot_partition 00:05:16.224 LINK mkfs 00:05:16.224 LINK err_injection 00:05:16.224 LINK connect_stress 00:05:16.224 LINK startup 00:05:16.224 LINK doorbell_aers 00:05:16.483 LINK reserve 00:05:16.483 LINK fused_ordering 00:05:16.483 LINK sgl 00:05:16.483 LINK simple_copy 00:05:16.483 LINK reset 00:05:16.483 LINK overhead 00:05:16.483 LINK aer 00:05:16.483 LINK nvme_dp 00:05:16.483 LINK nvme_compliance 00:05:16.483 CC examples/nvme/hello_world/hello_world.o 00:05:16.483 CC examples/nvme/abort/abort.o 00:05:16.483 CC examples/nvme/reconnect/reconnect.o 00:05:16.483 LINK fdp 00:05:16.483 CC examples/nvme/arbitration/arbitration.o 00:05:16.483 CC examples/nvme/hotplug/hotplug.o 00:05:16.483 CC examples/nvme/cmb_copy/cmb_copy.o 00:05:16.483 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:05:16.483 CC examples/nvme/nvme_manage/nvme_manage.o 00:05:16.483 CC examples/blob/hello_world/hello_blob.o 00:05:16.483 CC examples/accel/perf/accel_perf.o 00:05:16.483 CC examples/blob/cli/blobcli.o 00:05:16.483 CC examples/fsdev/hello_world/hello_fsdev.o 00:05:16.741 LINK cmb_copy 00:05:16.741 LINK pmr_persistence 00:05:16.741 LINK hello_world 00:05:16.741 LINK hotplug 00:05:16.741 LINK dif 00:05:16.741 LINK arbitration 00:05:16.741 LINK reconnect 00:05:16.741 LINK iscsi_fuzz 00:05:16.741 LINK abort 00:05:16.741 LINK hello_blob 00:05:16.741 LINK nvme_manage 00:05:16.741 LINK hello_fsdev 00:05:17.001 LINK accel_perf 00:05:17.001 LINK blobcli 00:05:17.260 LINK cuse 00:05:17.260 CC test/bdev/bdevio/bdevio.o 00:05:17.518 CC examples/bdev/hello_world/hello_bdev.o 00:05:17.518 CC examples/bdev/bdevperf/bdevperf.o 00:05:17.518 LINK bdevio 00:05:17.777 LINK hello_bdev 00:05:18.035 LINK bdevperf 00:05:18.603 CC examples/nvmf/nvmf/nvmf.o 00:05:18.862 LINK nvmf 00:05:19.797 LINK esnap 00:05:20.055 00:05:20.055 real 0m55.559s 00:05:20.055 user 8m16.692s 00:05:20.055 sys 3m44.155s 00:05:20.055 08:02:33 make -- 
common/autotest_common.sh@1130 -- $ xtrace_disable 00:05:20.055 08:02:33 make -- common/autotest_common.sh@10 -- $ set +x 00:05:20.055 ************************************ 00:05:20.055 END TEST make 00:05:20.055 ************************************ 00:05:20.055 08:02:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:05:20.055 08:02:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:05:20.055 08:02:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:05:20.055 08:02:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.055 08:02:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:05:20.055 08:02:33 -- pm/common@44 -- $ pid=1420868 00:05:20.055 08:02:33 -- pm/common@50 -- $ kill -TERM 1420868 00:05:20.055 08:02:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.055 08:02:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:05:20.055 08:02:33 -- pm/common@44 -- $ pid=1420869 00:05:20.055 08:02:33 -- pm/common@50 -- $ kill -TERM 1420869 00:05:20.055 08:02:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.055 08:02:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:05:20.055 08:02:33 -- pm/common@44 -- $ pid=1420872 00:05:20.055 08:02:33 -- pm/common@50 -- $ kill -TERM 1420872 00:05:20.055 08:02:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.056 08:02:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:05:20.056 08:02:33 -- pm/common@44 -- $ pid=1420895 00:05:20.056 08:02:33 -- pm/common@50 -- $ sudo -E kill -TERM 1420895 00:05:20.056 08:02:34 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:05:20.056 08:02:34 -- spdk/autorun.sh@27 -- $ 
sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:05:20.315 08:02:34 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:20.315 08:02:34 -- common/autotest_common.sh@1693 -- # lcov --version 00:05:20.315 08:02:34 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:20.315 08:02:34 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:20.315 08:02:34 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:20.315 08:02:34 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:20.315 08:02:34 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:20.315 08:02:34 -- scripts/common.sh@336 -- # IFS=.-: 00:05:20.315 08:02:34 -- scripts/common.sh@336 -- # read -ra ver1 00:05:20.315 08:02:34 -- scripts/common.sh@337 -- # IFS=.-: 00:05:20.315 08:02:34 -- scripts/common.sh@337 -- # read -ra ver2 00:05:20.315 08:02:34 -- scripts/common.sh@338 -- # local 'op=<' 00:05:20.315 08:02:34 -- scripts/common.sh@340 -- # ver1_l=2 00:05:20.315 08:02:34 -- scripts/common.sh@341 -- # ver2_l=1 00:05:20.315 08:02:34 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:20.315 08:02:34 -- scripts/common.sh@344 -- # case "$op" in 00:05:20.315 08:02:34 -- scripts/common.sh@345 -- # : 1 00:05:20.315 08:02:34 -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:20.315 08:02:34 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:20.315 08:02:34 -- scripts/common.sh@365 -- # decimal 1 00:05:20.315 08:02:34 -- scripts/common.sh@353 -- # local d=1 00:05:20.315 08:02:34 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:20.315 08:02:34 -- scripts/common.sh@355 -- # echo 1 00:05:20.315 08:02:34 -- scripts/common.sh@365 -- # ver1[v]=1 00:05:20.315 08:02:34 -- scripts/common.sh@366 -- # decimal 2 00:05:20.315 08:02:34 -- scripts/common.sh@353 -- # local d=2 00:05:20.315 08:02:34 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:20.315 08:02:34 -- scripts/common.sh@355 -- # echo 2 00:05:20.315 08:02:34 -- scripts/common.sh@366 -- # ver2[v]=2 00:05:20.315 08:02:34 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:20.315 08:02:34 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:20.315 08:02:34 -- scripts/common.sh@368 -- # return 0 00:05:20.315 08:02:34 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:20.315 08:02:34 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:20.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.315 --rc genhtml_branch_coverage=1 00:05:20.315 --rc genhtml_function_coverage=1 00:05:20.315 --rc genhtml_legend=1 00:05:20.315 --rc geninfo_all_blocks=1 00:05:20.315 --rc geninfo_unexecuted_blocks=1 00:05:20.315 00:05:20.315 ' 00:05:20.315 08:02:34 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:20.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.315 --rc genhtml_branch_coverage=1 00:05:20.315 --rc genhtml_function_coverage=1 00:05:20.315 --rc genhtml_legend=1 00:05:20.315 --rc geninfo_all_blocks=1 00:05:20.315 --rc geninfo_unexecuted_blocks=1 00:05:20.315 00:05:20.315 ' 00:05:20.315 08:02:34 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:20.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.315 --rc genhtml_branch_coverage=1 00:05:20.315 --rc 
genhtml_function_coverage=1 00:05:20.315 --rc genhtml_legend=1 00:05:20.315 --rc geninfo_all_blocks=1 00:05:20.315 --rc geninfo_unexecuted_blocks=1 00:05:20.315 00:05:20.315 ' 00:05:20.315 08:02:34 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:20.315 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:20.315 --rc genhtml_branch_coverage=1 00:05:20.315 --rc genhtml_function_coverage=1 00:05:20.315 --rc genhtml_legend=1 00:05:20.315 --rc geninfo_all_blocks=1 00:05:20.315 --rc geninfo_unexecuted_blocks=1 00:05:20.315 00:05:20.315 ' 00:05:20.315 08:02:34 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:20.315 08:02:34 -- nvmf/common.sh@7 -- # uname -s 00:05:20.315 08:02:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:20.315 08:02:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:20.315 08:02:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:20.315 08:02:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:20.315 08:02:34 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:20.315 08:02:34 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:20.315 08:02:34 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:20.315 08:02:34 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:20.315 08:02:34 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:20.315 08:02:34 -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:20.315 08:02:34 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:20.315 08:02:34 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:20.315 08:02:34 -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:05:20.315 08:02:34 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:20.315 08:02:34 -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:05:20.315 08:02:34 -- scripts/common.sh@15 -- # shopt -s extglob 00:05:20.315 08:02:34 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:20.315 08:02:34 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:20.315 08:02:34 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:20.315 08:02:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.315 08:02:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.315 08:02:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.315 08:02:34 -- paths/export.sh@5 -- # export PATH 00:05:20.315 08:02:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:20.315 08:02:34 -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:05:20.315 08:02:34 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:20.315 08:02:34 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:20.315 08:02:34 -- nvmf/setup.sh@8 -- # 
NVMF_TARGET_NS_CMD=() 00:05:20.315 08:02:34 -- nvmf/common.sh@50 -- # : 0 00:05:20.315 08:02:34 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:20.315 08:02:34 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:20.315 08:02:34 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:20.315 08:02:34 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:20.315 08:02:34 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:20.315 08:02:34 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:20.315 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:20.315 08:02:34 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:20.315 08:02:34 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:20.315 08:02:34 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:20.315 08:02:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:05:20.315 08:02:34 -- spdk/autotest.sh@32 -- # uname -s 00:05:20.315 08:02:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:05:20.315 08:02:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:05:20.315 08:02:34 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:20.315 08:02:34 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:05:20.315 08:02:34 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:05:20.315 08:02:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:05:20.315 08:02:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:05:20.315 08:02:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:05:20.315 08:02:34 -- spdk/autotest.sh@48 -- # udevadm_pid=1483303 00:05:20.315 08:02:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:05:20.316 08:02:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:05:20.316 
08:02:34 -- pm/common@17 -- # local monitor 00:05:20.316 08:02:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.316 08:02:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.316 08:02:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.316 08:02:34 -- pm/common@21 -- # date +%s 00:05:20.316 08:02:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:05:20.316 08:02:34 -- pm/common@21 -- # date +%s 00:05:20.316 08:02:34 -- pm/common@25 -- # sleep 1 00:05:20.316 08:02:34 -- pm/common@21 -- # date +%s 00:05:20.316 08:02:34 -- pm/common@21 -- # date +%s 00:05:20.316 08:02:34 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732086154 00:05:20.316 08:02:34 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732086154 00:05:20.316 08:02:34 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732086154 00:05:20.316 08:02:34 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732086154 00:05:20.316 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732086154_collect-cpu-load.pm.log 00:05:20.316 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732086154_collect-vmstat.pm.log 00:05:20.316 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732086154_collect-cpu-temp.pm.log 00:05:20.316 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732086154_collect-bmc-pm.bmc.pm.log 00:05:21.344 08:02:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:05:21.344 08:02:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:05:21.344 08:02:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:21.344 08:02:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.344 08:02:35 -- spdk/autotest.sh@59 -- # create_test_list 00:05:21.344 08:02:35 -- common/autotest_common.sh@752 -- # xtrace_disable 00:05:21.344 08:02:35 -- common/autotest_common.sh@10 -- # set +x 00:05:21.344 08:02:35 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:05:21.344 08:02:35 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:21.344 08:02:35 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:21.344 08:02:35 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:05:21.344 08:02:35 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:21.344 08:02:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:05:21.344 08:02:35 -- common/autotest_common.sh@1457 -- # uname 00:05:21.345 08:02:35 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:05:21.345 08:02:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:05:21.345 08:02:35 -- common/autotest_common.sh@1477 -- # uname 00:05:21.345 08:02:35 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:05:21.345 08:02:35 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:05:21.345 08:02:35 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:05:21.603 lcov: LCOV version 1.15 00:05:21.603 08:02:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:05:33.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:05:33.805 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:05:48.683 08:03:00 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:05:48.683 08:03:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:48.683 08:03:00 -- common/autotest_common.sh@10 -- # set +x 00:05:48.683 08:03:00 -- spdk/autotest.sh@78 -- # rm -f 00:05:48.683 08:03:00 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:49.251 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:05:49.251 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:05:49.251 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:05:49.251 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:05:49.251 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:05:49.251 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:05:49.251 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:05:49.251 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:05:49.251 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:05:49.251 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:05:49.251 0000:80:04.6 
(8086 2021): Already using the ioatdma driver 00:05:49.251 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:05:49.251 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:05:49.251 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:05:49.510 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:05:49.510 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:05:49.510 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:05:49.510 08:03:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:05:49.510 08:03:03 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:05:49.510 08:03:03 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:05:49.510 08:03:03 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:05:49.510 08:03:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:05:49.510 08:03:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:05:49.510 08:03:03 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:05:49.510 08:03:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:49.510 08:03:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:05:49.510 08:03:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:05:49.510 08:03:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:05:49.510 08:03:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:05:49.510 08:03:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:05:49.510 08:03:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:05:49.510 08:03:03 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:05:49.510 No valid GPT data, bailing 00:05:49.510 08:03:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:49.510 08:03:03 -- scripts/common.sh@394 -- # pt= 00:05:49.510 08:03:03 -- scripts/common.sh@395 -- # return 1 00:05:49.510 08:03:03 -- 
spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:05:49.510 1+0 records in 00:05:49.510 1+0 records out 00:05:49.510 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00144344 s, 726 MB/s 00:05:49.510 08:03:03 -- spdk/autotest.sh@105 -- # sync 00:05:49.510 08:03:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:05:49.510 08:03:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:05:49.510 08:03:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:05:56.080 08:03:08 -- spdk/autotest.sh@111 -- # uname -s 00:05:56.080 08:03:09 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:05:56.080 08:03:09 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:05:56.080 08:03:09 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:57.984 Hugepages 00:05:57.984 node hugesize free / total 00:05:57.984 node0 1048576kB 0 / 0 00:05:57.984 node0 2048kB 0 / 0 00:05:57.984 node1 1048576kB 0 / 0 00:05:57.984 node1 2048kB 0 / 0 00:05:57.984 00:05:57.984 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:57.984 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:05:57.984 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:05:57.984 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:05:57.984 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:05:57.984 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:05:57.984 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:05:57.985 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:05:57.985 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:05:57.985 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:05:57.985 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:05:57.985 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:05:57.985 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:05:57.985 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:05:57.985 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:05:57.985 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:05:57.985 I/OAT 
0000:80:04.6 8086 2021 1 ioatdma - - 00:05:57.985 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:05:57.985 08:03:11 -- spdk/autotest.sh@117 -- # uname -s 00:05:57.985 08:03:12 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:05:57.985 08:03:12 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:05:57.985 08:03:12 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:01.273 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:01.273 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:02.648 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:06:02.648 08:03:16 -- common/autotest_common.sh@1517 -- # sleep 1 00:06:03.583 08:03:17 -- common/autotest_common.sh@1518 -- # bdfs=() 00:06:03.583 08:03:17 -- common/autotest_common.sh@1518 -- # local bdfs 00:06:03.583 08:03:17 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:06:03.583 08:03:17 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:06:03.583 08:03:17 -- common/autotest_common.sh@1498 -- # bdfs=() 00:06:03.583 08:03:17 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:03.583 08:03:17 -- common/autotest_common.sh@1499 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:03.583 08:03:17 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:03.583 08:03:17 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:03.841 08:03:17 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:03.841 08:03:17 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:06:03.841 08:03:17 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:06:06.373 Waiting for block devices as requested 00:06:06.632 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:06:06.632 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:06.632 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:06.891 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:06.891 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:06.891 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:07.149 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:07.149 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:07.149 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:07.408 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:06:07.408 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:06:07.408 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:06:07.408 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:06:07.666 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:06:07.666 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:06:07.666 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:06:07.925 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:06:07.925 08:03:21 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:06:07.925 08:03:21 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:06:07.925 08:03:21 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:06:07.925 08:03:21 -- common/autotest_common.sh@1487 -- # grep 
0000:5e:00.0/nvme/nvme 00:06:07.925 08:03:21 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:07.925 08:03:21 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:06:07.925 08:03:21 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:06:07.925 08:03:21 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:06:07.925 08:03:21 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:06:07.925 08:03:21 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:06:07.925 08:03:21 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:06:07.925 08:03:21 -- common/autotest_common.sh@1531 -- # grep oacs 00:06:07.925 08:03:21 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:06:07.925 08:03:21 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:06:07.925 08:03:21 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:06:07.925 08:03:21 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:06:07.925 08:03:21 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:06:07.925 08:03:21 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:06:07.925 08:03:21 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:06:07.925 08:03:21 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:06:07.925 08:03:21 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:06:07.925 08:03:21 -- common/autotest_common.sh@1543 -- # continue 00:06:07.925 08:03:21 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:06:07.925 08:03:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:07.925 08:03:21 -- common/autotest_common.sh@10 -- # set +x 00:06:07.925 08:03:21 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:06:07.925 08:03:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:07.925 08:03:21 -- common/autotest_common.sh@10 -- # 
set +x 00:06:07.925 08:03:21 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:06:11.211 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:11.211 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:11.211 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:11.211 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:11.211 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:11.211 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:11.211 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:11.211 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:11.211 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:06:11.212 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:06:11.212 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:06:11.212 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:06:11.212 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:06:11.212 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:06:11.212 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:06:11.212 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:06:12.586 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:06:12.586 08:03:26 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:06:12.586 08:03:26 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:12.586 08:03:26 -- common/autotest_common.sh@10 -- # set +x 00:06:12.586 08:03:26 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:06:12.586 08:03:26 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:06:12.586 08:03:26 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:06:12.586 08:03:26 -- common/autotest_common.sh@1563 -- # bdfs=() 00:06:12.586 08:03:26 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:06:12.586 08:03:26 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:06:12.586 08:03:26 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:06:12.586 08:03:26 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:06:12.586 08:03:26 -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:06:12.586 08:03:26 -- common/autotest_common.sh@1498 -- # local bdfs 00:06:12.586 08:03:26 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:06:12.586 08:03:26 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:06:12.586 08:03:26 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:06:12.586 08:03:26 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:06:12.586 08:03:26 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:06:12.586 08:03:26 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:06:12.586 08:03:26 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:06:12.586 08:03:26 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:06:12.586 08:03:26 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:06:12.586 08:03:26 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:06:12.586 08:03:26 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:06:12.586 08:03:26 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:06:12.586 08:03:26 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:06:12.586 08:03:26 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=1498128 00:06:12.586 08:03:26 -- common/autotest_common.sh@1585 -- # waitforlisten 1498128 00:06:12.586 08:03:26 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:12.586 08:03:26 -- common/autotest_common.sh@835 -- # '[' -z 1498128 ']' 00:06:12.586 08:03:26 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.586 08:03:26 -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.586 08:03:26 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:12.586 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.586 08:03:26 -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.586 08:03:26 -- common/autotest_common.sh@10 -- # set +x 00:06:12.586 [2024-11-20 08:03:26.545233] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:06:12.586 [2024-11-20 08:03:26.545283] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1498128 ] 00:06:12.845 [2024-11-20 08:03:26.620617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.845 [2024-11-20 08:03:26.662931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.103 08:03:26 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:13.103 08:03:26 -- common/autotest_common.sh@868 -- # return 0 00:06:13.103 08:03:26 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:06:13.103 08:03:26 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:06:13.103 08:03:26 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:06:16.383 nvme0n1 00:06:16.383 08:03:29 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:06:16.383 [2024-11-20 08:03:30.052968] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:06:16.383 request: 00:06:16.383 { 00:06:16.383 "nvme_ctrlr_name": "nvme0", 00:06:16.383 "password": "test", 00:06:16.383 "method": "bdev_nvme_opal_revert", 00:06:16.383 "req_id": 1 00:06:16.383 } 00:06:16.383 Got JSON-RPC error response 00:06:16.383 response: 00:06:16.383 { 00:06:16.383 "code": -32602, 
00:06:16.383 "message": "Invalid parameters" 00:06:16.383 } 00:06:16.383 08:03:30 -- common/autotest_common.sh@1591 -- # true 00:06:16.383 08:03:30 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:06:16.383 08:03:30 -- common/autotest_common.sh@1595 -- # killprocess 1498128 00:06:16.383 08:03:30 -- common/autotest_common.sh@954 -- # '[' -z 1498128 ']' 00:06:16.383 08:03:30 -- common/autotest_common.sh@958 -- # kill -0 1498128 00:06:16.383 08:03:30 -- common/autotest_common.sh@959 -- # uname 00:06:16.383 08:03:30 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:16.383 08:03:30 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1498128 00:06:16.383 08:03:30 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:16.383 08:03:30 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:16.383 08:03:30 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1498128' 00:06:16.383 killing process with pid 1498128 00:06:16.383 08:03:30 -- common/autotest_common.sh@973 -- # kill 1498128 00:06:16.383 08:03:30 -- common/autotest_common.sh@978 -- # wait 1498128 00:06:18.283 08:03:32 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:06:18.283 08:03:32 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:06:18.283 08:03:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:18.541 08:03:32 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:06:18.541 08:03:32 -- spdk/autotest.sh@149 -- # timing_enter lib 00:06:18.541 08:03:32 -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:18.541 08:03:32 -- common/autotest_common.sh@10 -- # set +x 00:06:18.541 08:03:32 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:06:18.541 08:03:32 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:18.541 08:03:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.541 08:03:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.541 08:03:32 -- 
common/autotest_common.sh@10 -- # set +x 00:06:18.541 ************************************ 00:06:18.541 START TEST env 00:06:18.541 ************************************ 00:06:18.541 08:03:32 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:06:18.541 * Looking for test storage... 00:06:18.541 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:06:18.541 08:03:32 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:18.541 08:03:32 env -- common/autotest_common.sh@1693 -- # lcov --version 00:06:18.541 08:03:32 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:18.541 08:03:32 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:18.541 08:03:32 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:18.541 08:03:32 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:18.541 08:03:32 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:18.541 08:03:32 env -- scripts/common.sh@336 -- # IFS=.-: 00:06:18.541 08:03:32 env -- scripts/common.sh@336 -- # read -ra ver1 00:06:18.541 08:03:32 env -- scripts/common.sh@337 -- # IFS=.-: 00:06:18.541 08:03:32 env -- scripts/common.sh@337 -- # read -ra ver2 00:06:18.541 08:03:32 env -- scripts/common.sh@338 -- # local 'op=<' 00:06:18.541 08:03:32 env -- scripts/common.sh@340 -- # ver1_l=2 00:06:18.541 08:03:32 env -- scripts/common.sh@341 -- # ver2_l=1 00:06:18.541 08:03:32 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:18.541 08:03:32 env -- scripts/common.sh@344 -- # case "$op" in 00:06:18.541 08:03:32 env -- scripts/common.sh@345 -- # : 1 00:06:18.541 08:03:32 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:18.541 08:03:32 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:18.541 08:03:32 env -- scripts/common.sh@365 -- # decimal 1 00:06:18.541 08:03:32 env -- scripts/common.sh@353 -- # local d=1 00:06:18.541 08:03:32 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:18.541 08:03:32 env -- scripts/common.sh@355 -- # echo 1 00:06:18.541 08:03:32 env -- scripts/common.sh@365 -- # ver1[v]=1 00:06:18.541 08:03:32 env -- scripts/common.sh@366 -- # decimal 2 00:06:18.541 08:03:32 env -- scripts/common.sh@353 -- # local d=2 00:06:18.541 08:03:32 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:18.541 08:03:32 env -- scripts/common.sh@355 -- # echo 2 00:06:18.541 08:03:32 env -- scripts/common.sh@366 -- # ver2[v]=2 00:06:18.541 08:03:32 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:18.541 08:03:32 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:18.541 08:03:32 env -- scripts/common.sh@368 -- # return 0 00:06:18.541 08:03:32 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:18.542 08:03:32 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:18.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.542 --rc genhtml_branch_coverage=1 00:06:18.542 --rc genhtml_function_coverage=1 00:06:18.542 --rc genhtml_legend=1 00:06:18.542 --rc geninfo_all_blocks=1 00:06:18.542 --rc geninfo_unexecuted_blocks=1 00:06:18.542 00:06:18.542 ' 00:06:18.542 08:03:32 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:18.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.542 --rc genhtml_branch_coverage=1 00:06:18.542 --rc genhtml_function_coverage=1 00:06:18.542 --rc genhtml_legend=1 00:06:18.542 --rc geninfo_all_blocks=1 00:06:18.542 --rc geninfo_unexecuted_blocks=1 00:06:18.542 00:06:18.542 ' 00:06:18.542 08:03:32 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:18.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:06:18.542 --rc genhtml_branch_coverage=1 00:06:18.542 --rc genhtml_function_coverage=1 00:06:18.542 --rc genhtml_legend=1 00:06:18.542 --rc geninfo_all_blocks=1 00:06:18.542 --rc geninfo_unexecuted_blocks=1 00:06:18.542 00:06:18.542 ' 00:06:18.542 08:03:32 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:18.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:18.542 --rc genhtml_branch_coverage=1 00:06:18.542 --rc genhtml_function_coverage=1 00:06:18.542 --rc genhtml_legend=1 00:06:18.542 --rc geninfo_all_blocks=1 00:06:18.542 --rc geninfo_unexecuted_blocks=1 00:06:18.542 00:06:18.542 ' 00:06:18.542 08:03:32 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:18.542 08:03:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:18.542 08:03:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.542 08:03:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.542 ************************************ 00:06:18.542 START TEST env_memory 00:06:18.542 ************************************ 00:06:18.542 08:03:32 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:06:18.801 00:06:18.801 00:06:18.801 CUnit - A unit testing framework for C - Version 2.1-3 00:06:18.801 http://cunit.sourceforge.net/ 00:06:18.801 00:06:18.801 00:06:18.801 Suite: memory 00:06:18.801 Test: alloc and free memory map ...[2024-11-20 08:03:32.600454] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:06:18.801 passed 00:06:18.801 Test: mem map translation ...[2024-11-20 08:03:32.618539] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:06:18.801 [2024-11-20 
08:03:32.618554] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:06:18.801 [2024-11-20 08:03:32.618586] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:06:18.801 [2024-11-20 08:03:32.618592] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:06:18.801 passed 00:06:18.801 Test: mem map registration ...[2024-11-20 08:03:32.654106] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:06:18.801 [2024-11-20 08:03:32.654126] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:06:18.801 passed 00:06:18.801 Test: mem map adjacent registrations ...passed 00:06:18.801 00:06:18.801 Run Summary: Type Total Ran Passed Failed Inactive 00:06:18.801 suites 1 1 n/a 0 0 00:06:18.801 tests 4 4 4 0 0 00:06:18.801 asserts 152 152 152 0 n/a 00:06:18.801 00:06:18.801 Elapsed time = 0.134 seconds 00:06:18.801 00:06:18.801 real 0m0.147s 00:06:18.801 user 0m0.139s 00:06:18.801 sys 0m0.008s 00:06:18.801 08:03:32 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:18.801 08:03:32 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:06:18.801 ************************************ 00:06:18.801 END TEST env_memory 00:06:18.801 ************************************ 00:06:18.801 08:03:32 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:18.801 08:03:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:06:18.801 08:03:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:18.801 08:03:32 env -- common/autotest_common.sh@10 -- # set +x 00:06:18.801 ************************************ 00:06:18.801 START TEST env_vtophys 00:06:18.801 ************************************ 00:06:18.802 08:03:32 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:06:18.802 EAL: lib.eal log level changed from notice to debug 00:06:18.802 EAL: Detected lcore 0 as core 0 on socket 0 00:06:18.802 EAL: Detected lcore 1 as core 1 on socket 0 00:06:18.802 EAL: Detected lcore 2 as core 2 on socket 0 00:06:18.802 EAL: Detected lcore 3 as core 3 on socket 0 00:06:18.802 EAL: Detected lcore 4 as core 4 on socket 0 00:06:18.802 EAL: Detected lcore 5 as core 5 on socket 0 00:06:18.802 EAL: Detected lcore 6 as core 6 on socket 0 00:06:18.802 EAL: Detected lcore 7 as core 8 on socket 0 00:06:18.802 EAL: Detected lcore 8 as core 9 on socket 0 00:06:18.802 EAL: Detected lcore 9 as core 10 on socket 0 00:06:18.802 EAL: Detected lcore 10 as core 11 on socket 0 00:06:18.802 EAL: Detected lcore 11 as core 12 on socket 0 00:06:18.802 EAL: Detected lcore 12 as core 13 on socket 0 00:06:18.802 EAL: Detected lcore 13 as core 16 on socket 0 00:06:18.802 EAL: Detected lcore 14 as core 17 on socket 0 00:06:18.802 EAL: Detected lcore 15 as core 18 on socket 0 00:06:18.802 EAL: Detected lcore 16 as core 19 on socket 0 00:06:18.802 EAL: Detected lcore 17 as core 20 on socket 0 00:06:18.802 EAL: Detected lcore 18 as core 21 on socket 0 00:06:18.802 EAL: Detected lcore 19 as core 25 on socket 0 00:06:18.802 EAL: Detected lcore 20 as core 26 on socket 0 00:06:18.802 EAL: Detected lcore 21 as core 27 on socket 0 00:06:18.802 EAL: Detected lcore 22 as core 28 on socket 0 00:06:18.802 EAL: Detected lcore 23 as core 29 on socket 0 00:06:18.802 EAL: Detected lcore 24 as core 0 on socket 1 00:06:18.802 EAL: Detected lcore 25 
as core 1 on socket 1 00:06:18.802 EAL: Detected lcore 26 as core 2 on socket 1 00:06:18.802 EAL: Detected lcore 27 as core 3 on socket 1 00:06:18.802 EAL: Detected lcore 28 as core 4 on socket 1 00:06:18.802 EAL: Detected lcore 29 as core 5 on socket 1 00:06:18.802 EAL: Detected lcore 30 as core 6 on socket 1 00:06:18.802 EAL: Detected lcore 31 as core 8 on socket 1 00:06:18.802 EAL: Detected lcore 32 as core 10 on socket 1 00:06:18.802 EAL: Detected lcore 33 as core 11 on socket 1 00:06:18.802 EAL: Detected lcore 34 as core 12 on socket 1 00:06:18.802 EAL: Detected lcore 35 as core 13 on socket 1 00:06:18.802 EAL: Detected lcore 36 as core 16 on socket 1 00:06:18.802 EAL: Detected lcore 37 as core 17 on socket 1 00:06:18.802 EAL: Detected lcore 38 as core 18 on socket 1 00:06:18.802 EAL: Detected lcore 39 as core 19 on socket 1 00:06:18.802 EAL: Detected lcore 40 as core 20 on socket 1 00:06:18.802 EAL: Detected lcore 41 as core 21 on socket 1 00:06:18.802 EAL: Detected lcore 42 as core 24 on socket 1 00:06:18.802 EAL: Detected lcore 43 as core 25 on socket 1 00:06:18.802 EAL: Detected lcore 44 as core 26 on socket 1 00:06:18.802 EAL: Detected lcore 45 as core 27 on socket 1 00:06:18.802 EAL: Detected lcore 46 as core 28 on socket 1 00:06:18.802 EAL: Detected lcore 47 as core 29 on socket 1 00:06:18.802 EAL: Detected lcore 48 as core 0 on socket 0 00:06:18.802 EAL: Detected lcore 49 as core 1 on socket 0 00:06:18.802 EAL: Detected lcore 50 as core 2 on socket 0 00:06:18.802 EAL: Detected lcore 51 as core 3 on socket 0 00:06:18.802 EAL: Detected lcore 52 as core 4 on socket 0 00:06:18.802 EAL: Detected lcore 53 as core 5 on socket 0 00:06:18.802 EAL: Detected lcore 54 as core 6 on socket 0 00:06:18.802 EAL: Detected lcore 55 as core 8 on socket 0 00:06:18.802 EAL: Detected lcore 56 as core 9 on socket 0 00:06:18.802 EAL: Detected lcore 57 as core 10 on socket 0 00:06:18.802 EAL: Detected lcore 58 as core 11 on socket 0 00:06:18.802 EAL: Detected lcore 59 as core 
12 on socket 0 00:06:18.802 EAL: Detected lcore 60 as core 13 on socket 0 00:06:18.802 EAL: Detected lcore 61 as core 16 on socket 0 00:06:18.802 EAL: Detected lcore 62 as core 17 on socket 0 00:06:18.802 EAL: Detected lcore 63 as core 18 on socket 0 00:06:18.802 EAL: Detected lcore 64 as core 19 on socket 0 00:06:18.802 EAL: Detected lcore 65 as core 20 on socket 0 00:06:18.802 EAL: Detected lcore 66 as core 21 on socket 0 00:06:18.802 EAL: Detected lcore 67 as core 25 on socket 0 00:06:18.802 EAL: Detected lcore 68 as core 26 on socket 0 00:06:18.802 EAL: Detected lcore 69 as core 27 on socket 0 00:06:18.802 EAL: Detected lcore 70 as core 28 on socket 0 00:06:18.802 EAL: Detected lcore 71 as core 29 on socket 0 00:06:18.802 EAL: Detected lcore 72 as core 0 on socket 1 00:06:18.802 EAL: Detected lcore 73 as core 1 on socket 1 00:06:18.802 EAL: Detected lcore 74 as core 2 on socket 1 00:06:18.802 EAL: Detected lcore 75 as core 3 on socket 1 00:06:18.802 EAL: Detected lcore 76 as core 4 on socket 1 00:06:18.802 EAL: Detected lcore 77 as core 5 on socket 1 00:06:18.802 EAL: Detected lcore 78 as core 6 on socket 1 00:06:18.802 EAL: Detected lcore 79 as core 8 on socket 1 00:06:18.802 EAL: Detected lcore 80 as core 10 on socket 1 00:06:18.802 EAL: Detected lcore 81 as core 11 on socket 1 00:06:18.802 EAL: Detected lcore 82 as core 12 on socket 1 00:06:18.802 EAL: Detected lcore 83 as core 13 on socket 1 00:06:18.802 EAL: Detected lcore 84 as core 16 on socket 1 00:06:18.802 EAL: Detected lcore 85 as core 17 on socket 1 00:06:18.802 EAL: Detected lcore 86 as core 18 on socket 1 00:06:18.802 EAL: Detected lcore 87 as core 19 on socket 1 00:06:18.802 EAL: Detected lcore 88 as core 20 on socket 1 00:06:18.802 EAL: Detected lcore 89 as core 21 on socket 1 00:06:18.802 EAL: Detected lcore 90 as core 24 on socket 1 00:06:18.802 EAL: Detected lcore 91 as core 25 on socket 1 00:06:18.802 EAL: Detected lcore 92 as core 26 on socket 1 00:06:18.802 EAL: Detected lcore 93 as core 
27 on socket 1 00:06:18.802 EAL: Detected lcore 94 as core 28 on socket 1 00:06:18.802 EAL: Detected lcore 95 as core 29 on socket 1 00:06:18.802 EAL: Maximum logical cores by configuration: 128 00:06:18.802 EAL: Detected CPU lcores: 96 00:06:18.802 EAL: Detected NUMA nodes: 2 00:06:18.802 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:06:18.802 EAL: Detected shared linkage of DPDK 00:06:18.802 EAL: No shared files mode enabled, IPC will be disabled 00:06:18.802 EAL: Bus pci wants IOVA as 'DC' 00:06:18.802 EAL: Buses did not request a specific IOVA mode. 00:06:18.802 EAL: IOMMU is available, selecting IOVA as VA mode. 00:06:18.802 EAL: Selected IOVA mode 'VA' 00:06:18.802 EAL: Probing VFIO support... 00:06:18.802 EAL: IOMMU type 1 (Type 1) is supported 00:06:18.802 EAL: IOMMU type 7 (sPAPR) is not supported 00:06:18.802 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:06:18.802 EAL: VFIO support initialized 00:06:18.802 EAL: Ask a virtual area of 0x2e000 bytes 00:06:18.802 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:06:18.802 EAL: Setting up physically contiguous memory... 
00:06:18.802 EAL: Setting maximum number of open files to 524288
00:06:18.802 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:06:18.802 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:06:18.802 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:06:18.802 EAL: Ask a virtual area of 0x61000 bytes
00:06:18.802 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:06:18.802 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:18.802 EAL: Ask a virtual area of 0x400000000 bytes
00:06:18.802 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:06:18.802 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:06:18.802 EAL: Ask a virtual area of 0x61000 bytes
00:06:18.802 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:06:18.802 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:18.802 EAL: Ask a virtual area of 0x400000000 bytes
00:06:18.802 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:06:18.802 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:06:18.802 EAL: Ask a virtual area of 0x61000 bytes
00:06:18.802 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:06:18.802 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:18.802 EAL: Ask a virtual area of 0x400000000 bytes
00:06:18.802 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:06:18.802 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:06:18.802 EAL: Ask a virtual area of 0x61000 bytes
00:06:18.802 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:06:18.802 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:06:18.802 EAL: Ask a virtual area of 0x400000000 bytes
00:06:18.802 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:06:18.802 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:06:18.802 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:06:18.802 EAL: Ask a virtual area of 0x61000 bytes
00:06:18.802 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:06:18.802 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:18.802 EAL: Ask a virtual area of 0x400000000 bytes
00:06:18.802 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:06:18.802 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:06:18.802 EAL: Ask a virtual area of 0x61000 bytes
00:06:18.802 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:06:18.802 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:18.802 EAL: Ask a virtual area of 0x400000000 bytes
00:06:18.802 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:06:18.802 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:06:18.802 EAL: Ask a virtual area of 0x61000 bytes
00:06:18.802 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:06:18.802 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:18.802 EAL: Ask a virtual area of 0x400000000 bytes
00:06:18.802 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:06:18.802 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:06:18.802 EAL: Ask a virtual area of 0x61000 bytes
00:06:18.802 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:06:18.802 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:06:18.802 EAL: Ask a virtual area of 0x400000000 bytes
00:06:18.802 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:06:18.802 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:06:18.802 EAL: Hugepages will be freed exactly as allocated.
00:06:18.802 EAL: No shared files mode enabled, IPC is disabled
00:06:18.802 EAL: No shared files mode enabled, IPC is disabled
00:06:18.802 EAL: TSC frequency is ~2100000 KHz
00:06:18.802 EAL: Main lcore 0 is ready (tid=7f4a4d596a00;cpuset=[0])
00:06:18.802 EAL: Trying to obtain current memory policy.
00:06:18.802 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:18.802 EAL: Restoring previous memory policy: 0
00:06:18.802 EAL: request: mp_malloc_sync
00:06:18.802 EAL: No shared files mode enabled, IPC is disabled
00:06:18.802 EAL: Heap on socket 0 was expanded by 2MB
00:06:18.803 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: No PCI address specified using 'addr=' in: bus=pci
00:06:19.062 EAL: Mem event callback 'spdk:(nil)' registered
00:06:19.062
00:06:19.062
00:06:19.062 CUnit - A unit testing framework for C - Version 2.1-3
00:06:19.062 http://cunit.sourceforge.net/
00:06:19.062
00:06:19.062
00:06:19.062 Suite: components_suite
00:06:19.062 Test: vtophys_malloc_test ...passed
00:06:19.062 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:06:19.062 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.062 EAL: Restoring previous memory policy: 4
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was expanded by 4MB
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was shrunk by 4MB
00:06:19.062 EAL: Trying to obtain current memory policy.
00:06:19.062 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.062 EAL: Restoring previous memory policy: 4
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was expanded by 6MB
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was shrunk by 6MB
00:06:19.062 EAL: Trying to obtain current memory policy.
00:06:19.062 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.062 EAL: Restoring previous memory policy: 4
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was expanded by 10MB
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was shrunk by 10MB
00:06:19.062 EAL: Trying to obtain current memory policy.
00:06:19.062 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.062 EAL: Restoring previous memory policy: 4
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was expanded by 18MB
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was shrunk by 18MB
00:06:19.062 EAL: Trying to obtain current memory policy.
00:06:19.062 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.062 EAL: Restoring previous memory policy: 4
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was expanded by 34MB
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was shrunk by 34MB
00:06:19.062 EAL: Trying to obtain current memory policy.
00:06:19.062 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.062 EAL: Restoring previous memory policy: 4
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was expanded by 66MB
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was shrunk by 66MB
00:06:19.062 EAL: Trying to obtain current memory policy.
00:06:19.062 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.062 EAL: Restoring previous memory policy: 4
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was expanded by 130MB
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was shrunk by 130MB
00:06:19.062 EAL: Trying to obtain current memory policy.
00:06:19.062 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.062 EAL: Restoring previous memory policy: 4
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.062 EAL: request: mp_malloc_sync
00:06:19.062 EAL: No shared files mode enabled, IPC is disabled
00:06:19.062 EAL: Heap on socket 0 was expanded by 258MB
00:06:19.062 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.321 EAL: request: mp_malloc_sync
00:06:19.321 EAL: No shared files mode enabled, IPC is disabled
00:06:19.321 EAL: Heap on socket 0 was shrunk by 258MB
00:06:19.321 EAL: Trying to obtain current memory policy.
00:06:19.321 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.321 EAL: Restoring previous memory policy: 4
00:06:19.321 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.321 EAL: request: mp_malloc_sync
00:06:19.321 EAL: No shared files mode enabled, IPC is disabled
00:06:19.321 EAL: Heap on socket 0 was expanded by 514MB
00:06:19.321 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.579 EAL: request: mp_malloc_sync
00:06:19.579 EAL: No shared files mode enabled, IPC is disabled
00:06:19.579 EAL: Heap on socket 0 was shrunk by 514MB
00:06:19.579 EAL: Trying to obtain current memory policy.
00:06:19.579 EAL: Setting policy MPOL_PREFERRED for socket 0
00:06:19.579 EAL: Restoring previous memory policy: 4
00:06:19.579 EAL: Calling mem event callback 'spdk:(nil)'
00:06:19.579 EAL: request: mp_malloc_sync
00:06:19.579 EAL: No shared files mode enabled, IPC is disabled
00:06:19.579 EAL: Heap on socket 0 was expanded by 1026MB
00:06:19.838 EAL: Calling mem event callback 'spdk:(nil)'
00:06:20.097 EAL: request: mp_malloc_sync
00:06:20.097 EAL: No shared files mode enabled, IPC is disabled
00:06:20.097 EAL: Heap on socket 0 was shrunk by 1026MB
00:06:20.097 passed
00:06:20.097
00:06:20.097 Run Summary: Type Total Ran Passed Failed Inactive
00:06:20.097 suites 1 1 n/a 0 0
00:06:20.097 tests 2 2 2 0 0
00:06:20.097 asserts 497 497 497 0 n/a
00:06:20.097
00:06:20.097 Elapsed time = 0.970 seconds
00:06:20.097 EAL: Calling mem event callback 'spdk:(nil)'
00:06:20.097 EAL: request: mp_malloc_sync
00:06:20.097 EAL: No shared files mode enabled, IPC is disabled
00:06:20.097 EAL: Heap on socket 0 was shrunk by 2MB
00:06:20.097 EAL: No shared files mode enabled, IPC is disabled
00:06:20.097 EAL: No shared files mode enabled, IPC is disabled
00:06:20.097 EAL: No shared files mode enabled, IPC is disabled
00:06:20.097
00:06:20.097 real 0m1.100s
00:06:20.097 user 0m0.643s
00:06:20.097 sys 0m0.432s
00:06:20.097 08:03:33 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:20.097 08:03:33 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:06:20.097 ************************************
00:06:20.097 END TEST env_vtophys
00:06:20.097 ************************************
00:06:20.097 08:03:33 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:20.097 08:03:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:20.097 08:03:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:20.097 08:03:33 env -- common/autotest_common.sh@10 -- # set +x
00:06:20.097 ************************************
00:06:20.097 START TEST env_pci
00:06:20.097 ************************************
00:06:20.097 08:03:33 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:06:20.097
00:06:20.097
00:06:20.097 CUnit - A unit testing framework for C - Version 2.1-3
00:06:20.097 http://cunit.sourceforge.net/
00:06:20.097
00:06:20.097
00:06:20.097 Suite: pci
00:06:20.097 Test: pci_hook ...[2024-11-20 08:03:33.954741] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 1499451 has claimed it
00:06:20.097 EAL: Cannot find device (10000:00:01.0)
00:06:20.097 EAL: Failed to attach device on primary process
00:06:20.097 passed
00:06:20.097
00:06:20.097 Run Summary: Type Total Ran Passed Failed Inactive
00:06:20.097 suites 1 1 n/a 0 0
00:06:20.097 tests 1 1 1 0 0
00:06:20.097 asserts 25 25 25 0 n/a
00:06:20.097
00:06:20.097 Elapsed time = 0.026 seconds
00:06:20.097
00:06:20.097 real 0m0.045s
00:06:20.097 user 0m0.014s
00:06:20.097 sys 0m0.031s
00:06:20.097 08:03:33 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:20.097 08:03:33 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:06:20.097 ************************************
00:06:20.097 END TEST env_pci
00:06:20.097 ************************************
00:06:20.097 08:03:34 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:06:20.097 08:03:34 env -- env/env.sh@15 -- # uname
00:06:20.097 08:03:34 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:06:20.097 08:03:34 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:06:20.097 08:03:34 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:20.097 08:03:34 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:06:20.097 08:03:34 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:20.097 08:03:34 env -- common/autotest_common.sh@10 -- # set +x
00:06:20.097 ************************************
00:06:20.097 START TEST env_dpdk_post_init
00:06:20.097 ************************************
00:06:20.097 08:03:34 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:06:20.097 EAL: Detected CPU lcores: 96
00:06:20.097 EAL: Detected NUMA nodes: 2
00:06:20.097 EAL: Detected shared linkage of DPDK
00:06:20.097 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:20.097 EAL: Selected IOVA mode 'VA'
00:06:20.097 EAL: VFIO support initialized
00:06:20.097 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:20.357 EAL: Using IOMMU type 1 (Type 1)
00:06:20.357 EAL: Ignore mapping IO port bar(1)
00:06:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:06:20.357 EAL: Ignore mapping IO port bar(1)
00:06:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:06:20.357 EAL: Ignore mapping IO port bar(1)
00:06:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:06:20.357 EAL: Ignore mapping IO port bar(1)
00:06:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:06:20.357 EAL: Ignore mapping IO port bar(1)
00:06:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:06:20.357 EAL: Ignore mapping IO port bar(1)
00:06:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:06:20.357 EAL: Ignore mapping IO port bar(1)
00:06:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:06:20.357 EAL: Ignore mapping IO port bar(1)
00:06:20.357 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:06:21.420 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:06:21.420 EAL: Ignore mapping IO port bar(1)
00:06:21.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:06:21.420 EAL: Ignore mapping IO port bar(1)
00:06:21.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:06:21.420 EAL: Ignore mapping IO port bar(1)
00:06:21.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:06:21.420 EAL: Ignore mapping IO port bar(1)
00:06:21.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:06:21.420 EAL: Ignore mapping IO port bar(1)
00:06:21.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:06:21.420 EAL: Ignore mapping IO port bar(1)
00:06:21.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:06:21.420 EAL: Ignore mapping IO port bar(1)
00:06:21.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:06:21.420 EAL: Ignore mapping IO port bar(1)
00:06:21.420 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:06:24.802 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:06:24.802 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:06:25.060 Starting DPDK initialization...
00:06:25.060 Starting SPDK post initialization...
00:06:25.060 SPDK NVMe probe
00:06:25.060 Attaching to 0000:5e:00.0
00:06:25.060 Attached to 0000:5e:00.0
00:06:25.060 Cleaning up...
00:06:25.060
00:06:25.060 real 0m4.937s
00:06:25.060 user 0m3.509s
00:06:25.060 sys 0m0.500s
00:06:25.060 08:03:38 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.060 08:03:38 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:06:25.060 ************************************
00:06:25.060 END TEST env_dpdk_post_init
00:06:25.060 ************************************
00:06:25.061 08:03:39 env -- env/env.sh@26 -- # uname
00:06:25.061 08:03:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:06:25.061 08:03:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:25.061 08:03:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:25.061 08:03:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:25.061 08:03:39 env -- common/autotest_common.sh@10 -- # set +x
00:06:25.061 ************************************
00:06:25.061 START TEST env_mem_callbacks
00:06:25.061 ************************************
00:06:25.061 08:03:39 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:06:25.318 EAL: Detected CPU lcores: 96
00:06:25.318 EAL: Detected NUMA nodes: 2
00:06:25.318 EAL: Detected shared linkage of DPDK
00:06:25.318 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:06:25.318 EAL: Selected IOVA mode 'VA'
00:06:25.318 EAL: VFIO support initialized
00:06:25.318 TELEMETRY: No legacy callbacks, legacy socket not created
00:06:25.318
00:06:25.318
00:06:25.318 CUnit - A unit testing framework for C - Version 2.1-3
00:06:25.318 http://cunit.sourceforge.net/
00:06:25.318
00:06:25.318
00:06:25.318 Suite: memory
00:06:25.318 Test: test ...
00:06:25.318 register 0x200000200000 2097152
00:06:25.318 malloc 3145728
00:06:25.318 register 0x200000400000 4194304
00:06:25.318 buf 0x200000500000 len 3145728 PASSED
00:06:25.318 malloc 64
00:06:25.318 buf 0x2000004fff40 len 64 PASSED
00:06:25.318 malloc 4194304
00:06:25.318 register 0x200000800000 6291456
00:06:25.318 buf 0x200000a00000 len 4194304 PASSED
00:06:25.318 free 0x200000500000 3145728
00:06:25.318 free 0x2000004fff40 64
00:06:25.318 unregister 0x200000400000 4194304 PASSED
00:06:25.318 free 0x200000a00000 4194304
00:06:25.318 unregister 0x200000800000 6291456 PASSED
00:06:25.318 malloc 8388608
00:06:25.318 register 0x200000400000 10485760
00:06:25.318 buf 0x200000600000 len 8388608 PASSED
00:06:25.318 free 0x200000600000 8388608
00:06:25.318 unregister 0x200000400000 10485760 PASSED
00:06:25.318 passed
00:06:25.318
00:06:25.318 Run Summary: Type Total Ran Passed Failed Inactive
00:06:25.318 suites 1 1 n/a 0 0
00:06:25.318 tests 1 1 1 0 0
00:06:25.318 asserts 15 15 15 0 n/a
00:06:25.318
00:06:25.318 Elapsed time = 0.008 seconds
00:06:25.318
00:06:25.318 real 0m0.059s
00:06:25.318 user 0m0.018s
00:06:25.318 sys 0m0.040s
00:06:25.318 08:03:39 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.318 08:03:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:06:25.318 ************************************
00:06:25.318 END TEST env_mem_callbacks
00:06:25.318 ************************************
00:06:25.318
00:06:25.318 real 0m6.814s
00:06:25.318 user 0m4.556s
00:06:25.318 sys 0m1.339s
00:06:25.318 08:03:39 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:25.318 08:03:39 env -- common/autotest_common.sh@10 -- # set +x
00:06:25.318 ************************************
00:06:25.318 END TEST env
00:06:25.318 ************************************
00:06:25.318 08:03:39 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:25.318 08:03:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:25.318 08:03:39 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:25.318 08:03:39 -- common/autotest_common.sh@10 -- # set +x
00:06:25.318 ************************************
00:06:25.318 START TEST rpc
00:06:25.318 ************************************
00:06:25.318 08:03:39 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:06:25.318 * Looking for test storage...
00:06:25.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:25.318 08:03:39 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:06:25.318 08:03:39 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:06:25.318 08:03:39 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:06:25.576 08:03:39 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:06:25.576 08:03:39 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:06:25.576 08:03:39 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:06:25.576 08:03:39 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:06:25.576 08:03:39 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:06:25.576 08:03:39 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:06:25.576 08:03:39 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:06:25.576 08:03:39 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:06:25.576 08:03:39 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:06:25.576 08:03:39 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:06:25.576 08:03:39 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:06:25.576 08:03:39 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:06:25.576 08:03:39 rpc -- scripts/common.sh@344 -- # case "$op" in
00:06:25.576 08:03:39 rpc -- scripts/common.sh@345 -- # : 1
00:06:25.576 08:03:39 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:06:25.576 08:03:39 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:06:25.576 08:03:39 rpc -- scripts/common.sh@365 -- # decimal 1
00:06:25.576 08:03:39 rpc -- scripts/common.sh@353 -- # local d=1
00:06:25.576 08:03:39 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:06:25.576 08:03:39 rpc -- scripts/common.sh@355 -- # echo 1
00:06:25.576 08:03:39 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:06:25.576 08:03:39 rpc -- scripts/common.sh@366 -- # decimal 2
00:06:25.576 08:03:39 rpc -- scripts/common.sh@353 -- # local d=2
00:06:25.576 08:03:39 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:06:25.576 08:03:39 rpc -- scripts/common.sh@355 -- # echo 2
00:06:25.576 08:03:39 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:06:25.576 08:03:39 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:06:25.576 08:03:39 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:06:25.576 08:03:39 rpc -- scripts/common.sh@368 -- # return 0
00:06:25.576 08:03:39 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:06:25.576 08:03:39 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:06:25.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.576 --rc genhtml_branch_coverage=1
00:06:25.576 --rc genhtml_function_coverage=1
00:06:25.576 --rc genhtml_legend=1
00:06:25.576 --rc geninfo_all_blocks=1
00:06:25.576 --rc geninfo_unexecuted_blocks=1
00:06:25.576
00:06:25.576 '
00:06:25.576 08:03:39 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:06:25.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.576 --rc genhtml_branch_coverage=1
00:06:25.576 --rc genhtml_function_coverage=1
00:06:25.576 --rc genhtml_legend=1
00:06:25.576 --rc geninfo_all_blocks=1
00:06:25.576 --rc geninfo_unexecuted_blocks=1
00:06:25.576
00:06:25.576 '
00:06:25.576 08:03:39 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:06:25.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.576 --rc genhtml_branch_coverage=1
00:06:25.576 --rc genhtml_function_coverage=1
00:06:25.576 --rc genhtml_legend=1
00:06:25.576 --rc geninfo_all_blocks=1
00:06:25.576 --rc geninfo_unexecuted_blocks=1
00:06:25.576
00:06:25.576 '
00:06:25.576 08:03:39 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:06:25.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:06:25.576 --rc genhtml_branch_coverage=1
00:06:25.576 --rc genhtml_function_coverage=1
00:06:25.576 --rc genhtml_legend=1
00:06:25.576 --rc geninfo_all_blocks=1
00:06:25.576 --rc geninfo_unexecuted_blocks=1
00:06:25.576
00:06:25.576 '
00:06:25.576 08:03:39 rpc -- rpc/rpc.sh@65 -- # spdk_pid=1500512
00:06:25.576 08:03:39 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:06:25.576 08:03:39 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:06:25.576 08:03:39 rpc -- rpc/rpc.sh@67 -- # waitforlisten 1500512
00:06:25.576 08:03:39 rpc -- common/autotest_common.sh@835 -- # '[' -z 1500512 ']'
00:06:25.576 08:03:39 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:25.576 08:03:39 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:25.576 08:03:39 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:25.576 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:25.576 08:03:39 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:25.576 08:03:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:25.576 [2024-11-20 08:03:39.459753] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization...
00:06:25.577 [2024-11-20 08:03:39.459799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1500512 ]
00:06:25.577 [2024-11-20 08:03:39.531908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:25.577 [2024-11-20 08:03:39.573083] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:06:25.577 [2024-11-20 08:03:39.573120] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 1500512' to capture a snapshot of events at runtime.
00:06:25.577 [2024-11-20 08:03:39.573127] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:06:25.577 [2024-11-20 08:03:39.573133] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:06:25.577 [2024-11-20 08:03:39.573138] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid1500512 for offline analysis/debug.
00:06:25.577 [2024-11-20 08:03:39.573715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.835 08:03:39 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:25.835 08:03:39 rpc -- common/autotest_common.sh@868 -- # return 0
00:06:25.835 08:03:39 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:25.835 08:03:39 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:06:25.835 08:03:39 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:06:25.835 08:03:39 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:06:25.835 08:03:39 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:25.835 08:03:39 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:25.835 08:03:39 rpc -- common/autotest_common.sh@10 -- # set +x
00:06:25.835 ************************************
00:06:25.835 START TEST rpc_integrity
00:06:25.835 ************************************
00:06:25.835 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:06:25.835 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:06:25.835 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:25.835 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:25.835 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:25.835 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:06:25.835 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:06:26.095 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:06:26.095 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:06:26.095 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:26.095 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:26.095 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:26.095 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:06:26.095 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:06:26.095 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:26.095 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:26.095 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:26.095 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:06:26.095 {
00:06:26.095 "name": "Malloc0",
00:06:26.095 "aliases": [
00:06:26.095 "1eaf34d8-3296-4cdf-aaf3-19fe405289f7"
00:06:26.095 ],
00:06:26.095 "product_name": "Malloc disk",
00:06:26.095 "block_size": 512,
00:06:26.095 "num_blocks": 16384,
00:06:26.095 "uuid": "1eaf34d8-3296-4cdf-aaf3-19fe405289f7",
00:06:26.095 "assigned_rate_limits": {
00:06:26.095 "rw_ios_per_sec": 0,
00:06:26.095 "rw_mbytes_per_sec": 0,
00:06:26.095 "r_mbytes_per_sec": 0,
00:06:26.095 "w_mbytes_per_sec": 0
00:06:26.095 },
00:06:26.095 "claimed": false,
00:06:26.095 "zoned": false,
00:06:26.095 "supported_io_types": {
00:06:26.095 "read": true,
00:06:26.095 "write": true,
00:06:26.095 "unmap": true,
00:06:26.095 "flush": true,
00:06:26.095 "reset": true,
00:06:26.095 "nvme_admin": false,
00:06:26.095 "nvme_io": false,
00:06:26.095 "nvme_io_md": false,
00:06:26.095 "write_zeroes": true,
00:06:26.095 "zcopy": true,
00:06:26.095 "get_zone_info": false,
00:06:26.095 "zone_management": false,
00:06:26.095 "zone_append": false,
00:06:26.095 "compare": false,
00:06:26.095 "compare_and_write": false,
00:06:26.095 "abort": true,
00:06:26.095 "seek_hole": false,
00:06:26.095 "seek_data": false,
00:06:26.095 "copy": true,
00:06:26.095 "nvme_iov_md": false
00:06:26.095 },
00:06:26.095 "memory_domains": [
00:06:26.095 {
00:06:26.095 "dma_device_id": "system",
00:06:26.095 "dma_device_type": 1
00:06:26.095 },
00:06:26.095 {
00:06:26.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:26.095 "dma_device_type": 2
00:06:26.095 }
00:06:26.095 ],
00:06:26.095 "driver_specific": {}
00:06:26.095 }
00:06:26.095 ]'
00:06:26.095 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:06:26.095 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:06:26.095 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:06:26.095 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:26.095 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:26.095 [2024-11-20 08:03:39.958883] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:06:26.095 [2024-11-20 08:03:39.958910] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:26.095 [2024-11-20 08:03:39.958923] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x9b76e0
00:06:26.095 [2024-11-20 08:03:39.958929] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:26.095 [2024-11-20 08:03:39.959999] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:26.095 [2024-11-20 08:03:39.960021] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:06:26.095 Passthru0
00:06:26.095 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:26.095 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:06:26.095 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:06:26.095 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:06:26.095 08:03:39 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:26.095 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:06:26.095 {
00:06:26.095 "name": "Malloc0",
00:06:26.095 "aliases": [
00:06:26.095 "1eaf34d8-3296-4cdf-aaf3-19fe405289f7"
00:06:26.095 ],
00:06:26.095 "product_name": "Malloc disk",
00:06:26.095 "block_size": 512,
00:06:26.095 "num_blocks": 16384,
00:06:26.095 "uuid": "1eaf34d8-3296-4cdf-aaf3-19fe405289f7",
00:06:26.095 "assigned_rate_limits": {
00:06:26.095 "rw_ios_per_sec": 0,
00:06:26.095 "rw_mbytes_per_sec": 0,
00:06:26.095 "r_mbytes_per_sec": 0,
00:06:26.095 "w_mbytes_per_sec": 0
00:06:26.095 },
00:06:26.095 "claimed": true,
00:06:26.095 "claim_type": "exclusive_write",
00:06:26.095 "zoned": false,
00:06:26.095 "supported_io_types": {
00:06:26.095 "read": true,
00:06:26.095 "write": true,
00:06:26.095 "unmap": true,
00:06:26.095 "flush": true,
00:06:26.095 "reset": true,
00:06:26.095 "nvme_admin": false,
00:06:26.095 "nvme_io": false,
00:06:26.095 "nvme_io_md": false,
00:06:26.095 "write_zeroes": true,
00:06:26.095 "zcopy": true,
00:06:26.095 "get_zone_info": false,
00:06:26.095 "zone_management": false,
00:06:26.095 "zone_append": false,
00:06:26.095 "compare": false,
00:06:26.095 "compare_and_write": false,
00:06:26.095 "abort": true,
00:06:26.095 "seek_hole": false,
00:06:26.095 "seek_data": false,
00:06:26.095 "copy": true,
00:06:26.095 "nvme_iov_md": false
00:06:26.095 },
00:06:26.095 "memory_domains": [
00:06:26.095 {
00:06:26.095 "dma_device_id": "system",
00:06:26.095 "dma_device_type": 1
00:06:26.095 },
00:06:26.095 {
00:06:26.095 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:26.095 "dma_device_type": 2
00:06:26.095 }
00:06:26.095 ],
00:06:26.095 "driver_specific": {}
00:06:26.095 },
00:06:26.095 {
00:06:26.095 "name": "Passthru0", 00:06:26.095 "aliases": [ 00:06:26.095 "aff34667-45ad-5852-8241-e22c4fcd2f3c" 00:06:26.095 ], 00:06:26.095 "product_name": "passthru", 00:06:26.095 "block_size": 512, 00:06:26.095 "num_blocks": 16384, 00:06:26.095 "uuid": "aff34667-45ad-5852-8241-e22c4fcd2f3c", 00:06:26.095 "assigned_rate_limits": { 00:06:26.095 "rw_ios_per_sec": 0, 00:06:26.095 "rw_mbytes_per_sec": 0, 00:06:26.095 "r_mbytes_per_sec": 0, 00:06:26.095 "w_mbytes_per_sec": 0 00:06:26.095 }, 00:06:26.095 "claimed": false, 00:06:26.095 "zoned": false, 00:06:26.095 "supported_io_types": { 00:06:26.095 "read": true, 00:06:26.095 "write": true, 00:06:26.095 "unmap": true, 00:06:26.095 "flush": true, 00:06:26.095 "reset": true, 00:06:26.095 "nvme_admin": false, 00:06:26.095 "nvme_io": false, 00:06:26.095 "nvme_io_md": false, 00:06:26.095 "write_zeroes": true, 00:06:26.095 "zcopy": true, 00:06:26.095 "get_zone_info": false, 00:06:26.095 "zone_management": false, 00:06:26.095 "zone_append": false, 00:06:26.095 "compare": false, 00:06:26.095 "compare_and_write": false, 00:06:26.095 "abort": true, 00:06:26.095 "seek_hole": false, 00:06:26.095 "seek_data": false, 00:06:26.095 "copy": true, 00:06:26.095 "nvme_iov_md": false 00:06:26.095 }, 00:06:26.095 "memory_domains": [ 00:06:26.095 { 00:06:26.095 "dma_device_id": "system", 00:06:26.095 "dma_device_type": 1 00:06:26.095 }, 00:06:26.095 { 00:06:26.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.096 "dma_device_type": 2 00:06:26.096 } 00:06:26.096 ], 00:06:26.096 "driver_specific": { 00:06:26.096 "passthru": { 00:06:26.096 "name": "Passthru0", 00:06:26.096 "base_bdev_name": "Malloc0" 00:06:26.096 } 00:06:26.096 } 00:06:26.096 } 00:06:26.096 ]' 00:06:26.096 08:03:39 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:26.096 08:03:40 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:26.096 08:03:40 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:26.096 08:03:40 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.096 08:03:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.096 08:03:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.096 08:03:40 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:06:26.096 08:03:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.096 08:03:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.096 08:03:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.096 08:03:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:26.096 08:03:40 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.096 08:03:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.096 08:03:40 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.096 08:03:40 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:26.096 08:03:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:26.096 08:03:40 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:26.096 00:06:26.096 real 0m0.278s 00:06:26.096 user 0m0.176s 00:06:26.096 sys 0m0.039s 00:06:26.096 08:03:40 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.096 08:03:40 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.096 ************************************ 00:06:26.096 END TEST rpc_integrity 00:06:26.096 ************************************ 00:06:26.354 08:03:40 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:06:26.354 08:03:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.354 08:03:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.354 08:03:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.355 ************************************ 00:06:26.355 START TEST rpc_plugins 
00:06:26.355 ************************************ 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:06:26.355 08:03:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.355 08:03:40 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:06:26.355 08:03:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.355 08:03:40 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:06:26.355 { 00:06:26.355 "name": "Malloc1", 00:06:26.355 "aliases": [ 00:06:26.355 "eeaefc93-414a-4884-b427-a54db9580e93" 00:06:26.355 ], 00:06:26.355 "product_name": "Malloc disk", 00:06:26.355 "block_size": 4096, 00:06:26.355 "num_blocks": 256, 00:06:26.355 "uuid": "eeaefc93-414a-4884-b427-a54db9580e93", 00:06:26.355 "assigned_rate_limits": { 00:06:26.355 "rw_ios_per_sec": 0, 00:06:26.355 "rw_mbytes_per_sec": 0, 00:06:26.355 "r_mbytes_per_sec": 0, 00:06:26.355 "w_mbytes_per_sec": 0 00:06:26.355 }, 00:06:26.355 "claimed": false, 00:06:26.355 "zoned": false, 00:06:26.355 "supported_io_types": { 00:06:26.355 "read": true, 00:06:26.355 "write": true, 00:06:26.355 "unmap": true, 00:06:26.355 "flush": true, 00:06:26.355 "reset": true, 00:06:26.355 "nvme_admin": false, 00:06:26.355 "nvme_io": false, 00:06:26.355 "nvme_io_md": false, 00:06:26.355 "write_zeroes": true, 00:06:26.355 "zcopy": true, 00:06:26.355 "get_zone_info": false, 00:06:26.355 "zone_management": false, 00:06:26.355 
"zone_append": false, 00:06:26.355 "compare": false, 00:06:26.355 "compare_and_write": false, 00:06:26.355 "abort": true, 00:06:26.355 "seek_hole": false, 00:06:26.355 "seek_data": false, 00:06:26.355 "copy": true, 00:06:26.355 "nvme_iov_md": false 00:06:26.355 }, 00:06:26.355 "memory_domains": [ 00:06:26.355 { 00:06:26.355 "dma_device_id": "system", 00:06:26.355 "dma_device_type": 1 00:06:26.355 }, 00:06:26.355 { 00:06:26.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.355 "dma_device_type": 2 00:06:26.355 } 00:06:26.355 ], 00:06:26.355 "driver_specific": {} 00:06:26.355 } 00:06:26.355 ]' 00:06:26.355 08:03:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:06:26.355 08:03:40 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:06:26.355 08:03:40 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.355 08:03:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.355 08:03:40 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:06:26.355 08:03:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:06:26.355 08:03:40 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:06:26.355 00:06:26.355 real 0m0.144s 00:06:26.355 user 0m0.090s 00:06:26.355 sys 0m0.017s 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.355 08:03:40 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:06:26.355 ************************************ 
00:06:26.355 END TEST rpc_plugins 00:06:26.355 ************************************ 00:06:26.355 08:03:40 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:06:26.355 08:03:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.355 08:03:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.355 08:03:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.614 ************************************ 00:06:26.614 START TEST rpc_trace_cmd_test 00:06:26.614 ************************************ 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:06:26.614 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid1500512", 00:06:26.614 "tpoint_group_mask": "0x8", 00:06:26.614 "iscsi_conn": { 00:06:26.614 "mask": "0x2", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "scsi": { 00:06:26.614 "mask": "0x4", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "bdev": { 00:06:26.614 "mask": "0x8", 00:06:26.614 "tpoint_mask": "0xffffffffffffffff" 00:06:26.614 }, 00:06:26.614 "nvmf_rdma": { 00:06:26.614 "mask": "0x10", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "nvmf_tcp": { 00:06:26.614 "mask": "0x20", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "ftl": { 00:06:26.614 "mask": "0x40", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "blobfs": { 00:06:26.614 "mask": "0x80", 00:06:26.614 
"tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "dsa": { 00:06:26.614 "mask": "0x200", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "thread": { 00:06:26.614 "mask": "0x400", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "nvme_pcie": { 00:06:26.614 "mask": "0x800", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "iaa": { 00:06:26.614 "mask": "0x1000", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "nvme_tcp": { 00:06:26.614 "mask": "0x2000", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "bdev_nvme": { 00:06:26.614 "mask": "0x4000", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "sock": { 00:06:26.614 "mask": "0x8000", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "blob": { 00:06:26.614 "mask": "0x10000", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "bdev_raid": { 00:06:26.614 "mask": "0x20000", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 }, 00:06:26.614 "scheduler": { 00:06:26.614 "mask": "0x40000", 00:06:26.614 "tpoint_mask": "0x0" 00:06:26.614 } 00:06:26.614 }' 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:06:26.614 00:06:26.614 real 0m0.212s 00:06:26.614 user 0m0.177s 00:06:26.614 sys 0m0.027s 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.614 08:03:40 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:06:26.614 ************************************ 00:06:26.614 END TEST rpc_trace_cmd_test 00:06:26.614 ************************************ 00:06:26.614 08:03:40 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:06:26.614 08:03:40 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:06:26.614 08:03:40 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:06:26.614 08:03:40 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.614 08:03:40 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.614 08:03:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:26.874 ************************************ 00:06:26.874 START TEST rpc_daemon_integrity 00:06:26.874 ************************************ 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:06:26.874 { 00:06:26.874 "name": "Malloc2", 00:06:26.874 "aliases": [ 00:06:26.874 "5af778fd-c339-465b-9644-c0a11cb73c0e" 00:06:26.874 ], 00:06:26.874 "product_name": "Malloc disk", 00:06:26.874 "block_size": 512, 00:06:26.874 "num_blocks": 16384, 00:06:26.874 "uuid": "5af778fd-c339-465b-9644-c0a11cb73c0e", 00:06:26.874 "assigned_rate_limits": { 00:06:26.874 "rw_ios_per_sec": 0, 00:06:26.874 "rw_mbytes_per_sec": 0, 00:06:26.874 "r_mbytes_per_sec": 0, 00:06:26.874 "w_mbytes_per_sec": 0 00:06:26.874 }, 00:06:26.874 "claimed": false, 00:06:26.874 "zoned": false, 00:06:26.874 "supported_io_types": { 00:06:26.874 "read": true, 00:06:26.874 "write": true, 00:06:26.874 "unmap": true, 00:06:26.874 "flush": true, 00:06:26.874 "reset": true, 00:06:26.874 "nvme_admin": false, 00:06:26.874 "nvme_io": false, 00:06:26.874 "nvme_io_md": false, 00:06:26.874 "write_zeroes": true, 00:06:26.874 "zcopy": true, 00:06:26.874 "get_zone_info": false, 00:06:26.874 "zone_management": false, 00:06:26.874 "zone_append": false, 00:06:26.874 "compare": false, 00:06:26.874 "compare_and_write": false, 00:06:26.874 "abort": true, 00:06:26.874 "seek_hole": false, 00:06:26.874 "seek_data": false, 00:06:26.874 "copy": true, 00:06:26.874 "nvme_iov_md": false 00:06:26.874 }, 00:06:26.874 "memory_domains": [ 00:06:26.874 { 
00:06:26.874 "dma_device_id": "system", 00:06:26.874 "dma_device_type": 1 00:06:26.874 }, 00:06:26.874 { 00:06:26.874 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.874 "dma_device_type": 2 00:06:26.874 } 00:06:26.874 ], 00:06:26.874 "driver_specific": {} 00:06:26.874 } 00:06:26.874 ]' 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.874 [2024-11-20 08:03:40.801174] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:06:26.874 [2024-11-20 08:03:40.801205] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:26.874 [2024-11-20 08:03:40.801219] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xa47b70 00:06:26.874 [2024-11-20 08:03:40.801225] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:26.874 [2024-11-20 08:03:40.802187] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:26.874 [2024-11-20 08:03:40.802213] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:06:26.874 Passthru0 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:06:26.874 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:06:26.874 { 00:06:26.874 "name": "Malloc2", 00:06:26.874 "aliases": [ 00:06:26.874 "5af778fd-c339-465b-9644-c0a11cb73c0e" 00:06:26.874 ], 00:06:26.874 "product_name": "Malloc disk", 00:06:26.874 "block_size": 512, 00:06:26.874 "num_blocks": 16384, 00:06:26.874 "uuid": "5af778fd-c339-465b-9644-c0a11cb73c0e", 00:06:26.874 "assigned_rate_limits": { 00:06:26.874 "rw_ios_per_sec": 0, 00:06:26.874 "rw_mbytes_per_sec": 0, 00:06:26.874 "r_mbytes_per_sec": 0, 00:06:26.874 "w_mbytes_per_sec": 0 00:06:26.874 }, 00:06:26.874 "claimed": true, 00:06:26.874 "claim_type": "exclusive_write", 00:06:26.874 "zoned": false, 00:06:26.874 "supported_io_types": { 00:06:26.874 "read": true, 00:06:26.874 "write": true, 00:06:26.874 "unmap": true, 00:06:26.874 "flush": true, 00:06:26.874 "reset": true, 00:06:26.874 "nvme_admin": false, 00:06:26.874 "nvme_io": false, 00:06:26.875 "nvme_io_md": false, 00:06:26.875 "write_zeroes": true, 00:06:26.875 "zcopy": true, 00:06:26.875 "get_zone_info": false, 00:06:26.875 "zone_management": false, 00:06:26.875 "zone_append": false, 00:06:26.875 "compare": false, 00:06:26.875 "compare_and_write": false, 00:06:26.875 "abort": true, 00:06:26.875 "seek_hole": false, 00:06:26.875 "seek_data": false, 00:06:26.875 "copy": true, 00:06:26.875 "nvme_iov_md": false 00:06:26.875 }, 00:06:26.875 "memory_domains": [ 00:06:26.875 { 00:06:26.875 "dma_device_id": "system", 00:06:26.875 "dma_device_type": 1 00:06:26.875 }, 00:06:26.875 { 00:06:26.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.875 "dma_device_type": 2 00:06:26.875 } 00:06:26.875 ], 00:06:26.875 "driver_specific": {} 00:06:26.875 }, 00:06:26.875 { 00:06:26.875 "name": "Passthru0", 00:06:26.875 "aliases": [ 00:06:26.875 "112dc15d-cf8a-57a8-b24b-cb73636da5ed" 00:06:26.875 ], 00:06:26.875 "product_name": "passthru", 00:06:26.875 "block_size": 512, 00:06:26.875 "num_blocks": 16384, 00:06:26.875 "uuid": 
"112dc15d-cf8a-57a8-b24b-cb73636da5ed", 00:06:26.875 "assigned_rate_limits": { 00:06:26.875 "rw_ios_per_sec": 0, 00:06:26.875 "rw_mbytes_per_sec": 0, 00:06:26.875 "r_mbytes_per_sec": 0, 00:06:26.875 "w_mbytes_per_sec": 0 00:06:26.875 }, 00:06:26.875 "claimed": false, 00:06:26.875 "zoned": false, 00:06:26.875 "supported_io_types": { 00:06:26.875 "read": true, 00:06:26.875 "write": true, 00:06:26.875 "unmap": true, 00:06:26.875 "flush": true, 00:06:26.875 "reset": true, 00:06:26.875 "nvme_admin": false, 00:06:26.875 "nvme_io": false, 00:06:26.875 "nvme_io_md": false, 00:06:26.875 "write_zeroes": true, 00:06:26.875 "zcopy": true, 00:06:26.875 "get_zone_info": false, 00:06:26.875 "zone_management": false, 00:06:26.875 "zone_append": false, 00:06:26.875 "compare": false, 00:06:26.875 "compare_and_write": false, 00:06:26.875 "abort": true, 00:06:26.875 "seek_hole": false, 00:06:26.875 "seek_data": false, 00:06:26.875 "copy": true, 00:06:26.875 "nvme_iov_md": false 00:06:26.875 }, 00:06:26.875 "memory_domains": [ 00:06:26.875 { 00:06:26.875 "dma_device_id": "system", 00:06:26.875 "dma_device_type": 1 00:06:26.875 }, 00:06:26.875 { 00:06:26.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:26.875 "dma_device_type": 2 00:06:26.875 } 00:06:26.875 ], 00:06:26.875 "driver_specific": { 00:06:26.875 "passthru": { 00:06:26.875 "name": "Passthru0", 00:06:26.875 "base_bdev_name": "Malloc2" 00:06:26.875 } 00:06:26.875 } 00:06:26.875 } 00:06:26.875 ]' 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:06:26.875 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:06:27.134 08:03:40 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:06:27.134 00:06:27.134 real 0m0.272s 00:06:27.134 user 0m0.167s 00:06:27.134 sys 0m0.039s 00:06:27.134 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.134 08:03:40 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:06:27.134 ************************************ 00:06:27.134 END TEST rpc_daemon_integrity 00:06:27.134 ************************************ 00:06:27.134 08:03:40 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:06:27.134 08:03:40 rpc -- rpc/rpc.sh@84 -- # killprocess 1500512 00:06:27.134 08:03:40 rpc -- common/autotest_common.sh@954 -- # '[' -z 1500512 ']' 00:06:27.134 08:03:40 rpc -- common/autotest_common.sh@958 -- # kill -0 1500512 00:06:27.134 08:03:40 rpc -- common/autotest_common.sh@959 -- # uname 00:06:27.134 08:03:40 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.134 08:03:40 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1500512 00:06:27.134 08:03:41 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.134 08:03:41 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.134 08:03:41 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1500512' 00:06:27.134 killing process with pid 1500512 00:06:27.134 08:03:41 rpc -- common/autotest_common.sh@973 -- # kill 1500512 00:06:27.134 08:03:41 rpc -- common/autotest_common.sh@978 -- # wait 1500512 00:06:27.392 00:06:27.392 real 0m2.078s 00:06:27.392 user 0m2.658s 00:06:27.392 sys 0m0.683s 00:06:27.392 08:03:41 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.393 08:03:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.393 ************************************ 00:06:27.393 END TEST rpc 00:06:27.393 ************************************ 00:06:27.393 08:03:41 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:27.393 08:03:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.393 08:03:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.393 08:03:41 -- common/autotest_common.sh@10 -- # set +x 00:06:27.393 ************************************ 00:06:27.393 START TEST skip_rpc 00:06:27.393 ************************************ 00:06:27.393 08:03:41 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:06:27.652 * Looking for test storage... 
00:06:27.652 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:06:27.652 08:03:41 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:27.652 08:03:41 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:27.652 08:03:41 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:27.652 08:03:41 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@345 -- # : 1 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.652 08:03:41 skip_rpc -- scripts/common.sh@368 -- # return 0 00:06:27.652 08:03:41 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.652 08:03:41 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:27.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.652 --rc genhtml_branch_coverage=1 00:06:27.652 --rc genhtml_function_coverage=1 00:06:27.652 --rc genhtml_legend=1 00:06:27.652 --rc geninfo_all_blocks=1 00:06:27.652 --rc geninfo_unexecuted_blocks=1 00:06:27.652 00:06:27.652 ' 00:06:27.652 08:03:41 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:27.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.652 --rc genhtml_branch_coverage=1 00:06:27.652 --rc genhtml_function_coverage=1 00:06:27.652 --rc genhtml_legend=1 00:06:27.652 --rc geninfo_all_blocks=1 00:06:27.652 --rc geninfo_unexecuted_blocks=1 00:06:27.652 00:06:27.652 ' 00:06:27.652 08:03:41 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:06:27.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.652 --rc genhtml_branch_coverage=1 00:06:27.652 --rc genhtml_function_coverage=1 00:06:27.652 --rc genhtml_legend=1 00:06:27.652 --rc geninfo_all_blocks=1 00:06:27.652 --rc geninfo_unexecuted_blocks=1 00:06:27.652 00:06:27.652 ' 00:06:27.652 08:03:41 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:27.652 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.652 --rc genhtml_branch_coverage=1 00:06:27.652 --rc genhtml_function_coverage=1 00:06:27.652 --rc genhtml_legend=1 00:06:27.652 --rc geninfo_all_blocks=1 00:06:27.652 --rc geninfo_unexecuted_blocks=1 00:06:27.652 00:06:27.652 ' 00:06:27.652 08:03:41 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:27.652 08:03:41 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:27.652 08:03:41 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:06:27.652 08:03:41 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.652 08:03:41 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.652 08:03:41 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:27.652 ************************************ 00:06:27.652 START TEST skip_rpc 00:06:27.652 ************************************ 00:06:27.652 08:03:41 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:06:27.652 08:03:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=1501075 00:06:27.652 08:03:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:06:27.652 08:03:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:27.652 08:03:41 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:06:27.652 [2024-11-20 08:03:41.636906] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:06:27.652 [2024-11-20 08:03:41.636941] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501075 ] 00:06:27.911 [2024-11-20 08:03:41.710684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:27.911 [2024-11-20 08:03:41.750440] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:33.184 08:03:46 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 1501075 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 1501075 ']' 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 1501075 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1501075 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1501075' 00:06:33.184 killing process with pid 1501075 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 1501075 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 1501075 00:06:33.184 00:06:33.184 real 0m5.360s 00:06:33.184 user 0m5.121s 00:06:33.184 sys 0m0.275s 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.184 08:03:46 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.184 ************************************ 00:06:33.184 END TEST skip_rpc 00:06:33.184 ************************************ 00:06:33.184 08:03:46 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:06:33.184 08:03:46 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:33.184 08:03:46 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.184 08:03:46 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.184 ************************************ 00:06:33.184 START TEST skip_rpc_with_json 00:06:33.184 ************************************ 00:06:33.184 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:06:33.184 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:06:33.184 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=1501981 00:06:33.184 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:33.184 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:33.184 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 1501981 00:06:33.184 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 1501981 ']' 00:06:33.184 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:33.184 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:33.184 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:33.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:33.184 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:33.184 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.184 [2024-11-20 08:03:47.062395] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:06:33.184 [2024-11-20 08:03:47.062433] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1501981 ] 00:06:33.184 [2024-11-20 08:03:47.135790] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.184 [2024-11-20 08:03:47.177956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.443 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:33.443 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:06:33.443 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:06:33.443 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.443 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.443 [2024-11-20 08:03:47.387548] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:06:33.443 request: 00:06:33.443 { 00:06:33.443 "trtype": "tcp", 00:06:33.443 "method": "nvmf_get_transports", 00:06:33.443 "req_id": 1 00:06:33.443 } 00:06:33.443 Got JSON-RPC error response 00:06:33.443 response: 00:06:33.443 { 00:06:33.443 "code": -19, 00:06:33.443 "message": "No such device" 00:06:33.443 } 00:06:33.443 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:33.443 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:06:33.443 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.443 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.443 [2024-11-20 08:03:47.399650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:33.443 08:03:47 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.443 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:06:33.444 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:33.444 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:33.703 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:33.703 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:33.703 { 00:06:33.703 "subsystems": [ 00:06:33.703 { 00:06:33.703 "subsystem": "fsdev", 00:06:33.703 "config": [ 00:06:33.703 { 00:06:33.703 "method": "fsdev_set_opts", 00:06:33.703 "params": { 00:06:33.703 "fsdev_io_pool_size": 65535, 00:06:33.703 "fsdev_io_cache_size": 256 00:06:33.703 } 00:06:33.703 } 00:06:33.703 ] 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "vfio_user_target", 00:06:33.703 "config": null 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "keyring", 00:06:33.703 "config": [] 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "iobuf", 00:06:33.703 "config": [ 00:06:33.703 { 00:06:33.703 "method": "iobuf_set_options", 00:06:33.703 "params": { 00:06:33.703 "small_pool_count": 8192, 00:06:33.703 "large_pool_count": 1024, 00:06:33.703 "small_bufsize": 8192, 00:06:33.703 "large_bufsize": 135168, 00:06:33.703 "enable_numa": false 00:06:33.703 } 00:06:33.703 } 00:06:33.703 ] 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "sock", 00:06:33.703 "config": [ 00:06:33.703 { 00:06:33.703 "method": "sock_set_default_impl", 00:06:33.703 "params": { 00:06:33.703 "impl_name": "posix" 00:06:33.703 } 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "method": "sock_impl_set_options", 00:06:33.703 "params": { 00:06:33.703 "impl_name": "ssl", 00:06:33.703 "recv_buf_size": 4096, 00:06:33.703 "send_buf_size": 4096, 
00:06:33.703 "enable_recv_pipe": true, 00:06:33.703 "enable_quickack": false, 00:06:33.703 "enable_placement_id": 0, 00:06:33.703 "enable_zerocopy_send_server": true, 00:06:33.703 "enable_zerocopy_send_client": false, 00:06:33.703 "zerocopy_threshold": 0, 00:06:33.703 "tls_version": 0, 00:06:33.703 "enable_ktls": false 00:06:33.703 } 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "method": "sock_impl_set_options", 00:06:33.703 "params": { 00:06:33.703 "impl_name": "posix", 00:06:33.703 "recv_buf_size": 2097152, 00:06:33.703 "send_buf_size": 2097152, 00:06:33.703 "enable_recv_pipe": true, 00:06:33.703 "enable_quickack": false, 00:06:33.703 "enable_placement_id": 0, 00:06:33.703 "enable_zerocopy_send_server": true, 00:06:33.703 "enable_zerocopy_send_client": false, 00:06:33.703 "zerocopy_threshold": 0, 00:06:33.703 "tls_version": 0, 00:06:33.703 "enable_ktls": false 00:06:33.703 } 00:06:33.703 } 00:06:33.703 ] 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "vmd", 00:06:33.703 "config": [] 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "accel", 00:06:33.703 "config": [ 00:06:33.703 { 00:06:33.703 "method": "accel_set_options", 00:06:33.703 "params": { 00:06:33.703 "small_cache_size": 128, 00:06:33.703 "large_cache_size": 16, 00:06:33.703 "task_count": 2048, 00:06:33.703 "sequence_count": 2048, 00:06:33.703 "buf_count": 2048 00:06:33.703 } 00:06:33.703 } 00:06:33.703 ] 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "bdev", 00:06:33.703 "config": [ 00:06:33.703 { 00:06:33.703 "method": "bdev_set_options", 00:06:33.703 "params": { 00:06:33.703 "bdev_io_pool_size": 65535, 00:06:33.703 "bdev_io_cache_size": 256, 00:06:33.703 "bdev_auto_examine": true, 00:06:33.703 "iobuf_small_cache_size": 128, 00:06:33.703 "iobuf_large_cache_size": 16 00:06:33.703 } 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "method": "bdev_raid_set_options", 00:06:33.703 "params": { 00:06:33.703 "process_window_size_kb": 1024, 00:06:33.703 "process_max_bandwidth_mb_sec": 0 
00:06:33.703 } 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "method": "bdev_iscsi_set_options", 00:06:33.703 "params": { 00:06:33.703 "timeout_sec": 30 00:06:33.703 } 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "method": "bdev_nvme_set_options", 00:06:33.703 "params": { 00:06:33.703 "action_on_timeout": "none", 00:06:33.703 "timeout_us": 0, 00:06:33.703 "timeout_admin_us": 0, 00:06:33.703 "keep_alive_timeout_ms": 10000, 00:06:33.703 "arbitration_burst": 0, 00:06:33.703 "low_priority_weight": 0, 00:06:33.703 "medium_priority_weight": 0, 00:06:33.703 "high_priority_weight": 0, 00:06:33.703 "nvme_adminq_poll_period_us": 10000, 00:06:33.703 "nvme_ioq_poll_period_us": 0, 00:06:33.703 "io_queue_requests": 0, 00:06:33.703 "delay_cmd_submit": true, 00:06:33.703 "transport_retry_count": 4, 00:06:33.703 "bdev_retry_count": 3, 00:06:33.703 "transport_ack_timeout": 0, 00:06:33.703 "ctrlr_loss_timeout_sec": 0, 00:06:33.703 "reconnect_delay_sec": 0, 00:06:33.703 "fast_io_fail_timeout_sec": 0, 00:06:33.703 "disable_auto_failback": false, 00:06:33.703 "generate_uuids": false, 00:06:33.703 "transport_tos": 0, 00:06:33.703 "nvme_error_stat": false, 00:06:33.703 "rdma_srq_size": 0, 00:06:33.703 "io_path_stat": false, 00:06:33.703 "allow_accel_sequence": false, 00:06:33.703 "rdma_max_cq_size": 0, 00:06:33.703 "rdma_cm_event_timeout_ms": 0, 00:06:33.703 "dhchap_digests": [ 00:06:33.703 "sha256", 00:06:33.703 "sha384", 00:06:33.703 "sha512" 00:06:33.703 ], 00:06:33.703 "dhchap_dhgroups": [ 00:06:33.703 "null", 00:06:33.703 "ffdhe2048", 00:06:33.703 "ffdhe3072", 00:06:33.703 "ffdhe4096", 00:06:33.703 "ffdhe6144", 00:06:33.703 "ffdhe8192" 00:06:33.703 ] 00:06:33.703 } 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "method": "bdev_nvme_set_hotplug", 00:06:33.703 "params": { 00:06:33.703 "period_us": 100000, 00:06:33.703 "enable": false 00:06:33.703 } 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "method": "bdev_wait_for_examine" 00:06:33.703 } 00:06:33.703 ] 00:06:33.703 }, 00:06:33.703 { 
00:06:33.703 "subsystem": "scsi", 00:06:33.703 "config": null 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "scheduler", 00:06:33.703 "config": [ 00:06:33.703 { 00:06:33.703 "method": "framework_set_scheduler", 00:06:33.703 "params": { 00:06:33.703 "name": "static" 00:06:33.703 } 00:06:33.703 } 00:06:33.703 ] 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "vhost_scsi", 00:06:33.703 "config": [] 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "vhost_blk", 00:06:33.703 "config": [] 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "ublk", 00:06:33.703 "config": [] 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "nbd", 00:06:33.703 "config": [] 00:06:33.703 }, 00:06:33.703 { 00:06:33.703 "subsystem": "nvmf", 00:06:33.703 "config": [ 00:06:33.704 { 00:06:33.704 "method": "nvmf_set_config", 00:06:33.704 "params": { 00:06:33.704 "discovery_filter": "match_any", 00:06:33.704 "admin_cmd_passthru": { 00:06:33.704 "identify_ctrlr": false 00:06:33.704 }, 00:06:33.704 "dhchap_digests": [ 00:06:33.704 "sha256", 00:06:33.704 "sha384", 00:06:33.704 "sha512" 00:06:33.704 ], 00:06:33.704 "dhchap_dhgroups": [ 00:06:33.704 "null", 00:06:33.704 "ffdhe2048", 00:06:33.704 "ffdhe3072", 00:06:33.704 "ffdhe4096", 00:06:33.704 "ffdhe6144", 00:06:33.704 "ffdhe8192" 00:06:33.704 ] 00:06:33.704 } 00:06:33.704 }, 00:06:33.704 { 00:06:33.704 "method": "nvmf_set_max_subsystems", 00:06:33.704 "params": { 00:06:33.704 "max_subsystems": 1024 00:06:33.704 } 00:06:33.704 }, 00:06:33.704 { 00:06:33.704 "method": "nvmf_set_crdt", 00:06:33.704 "params": { 00:06:33.704 "crdt1": 0, 00:06:33.704 "crdt2": 0, 00:06:33.704 "crdt3": 0 00:06:33.704 } 00:06:33.704 }, 00:06:33.704 { 00:06:33.704 "method": "nvmf_create_transport", 00:06:33.704 "params": { 00:06:33.704 "trtype": "TCP", 00:06:33.704 "max_queue_depth": 128, 00:06:33.704 "max_io_qpairs_per_ctrlr": 127, 00:06:33.704 "in_capsule_data_size": 4096, 00:06:33.704 "max_io_size": 131072, 00:06:33.704 
"io_unit_size": 131072, 00:06:33.704 "max_aq_depth": 128, 00:06:33.704 "num_shared_buffers": 511, 00:06:33.704 "buf_cache_size": 4294967295, 00:06:33.704 "dif_insert_or_strip": false, 00:06:33.704 "zcopy": false, 00:06:33.704 "c2h_success": true, 00:06:33.704 "sock_priority": 0, 00:06:33.704 "abort_timeout_sec": 1, 00:06:33.704 "ack_timeout": 0, 00:06:33.704 "data_wr_pool_size": 0 00:06:33.704 } 00:06:33.704 } 00:06:33.704 ] 00:06:33.704 }, 00:06:33.704 { 00:06:33.704 "subsystem": "iscsi", 00:06:33.704 "config": [ 00:06:33.704 { 00:06:33.704 "method": "iscsi_set_options", 00:06:33.704 "params": { 00:06:33.704 "node_base": "iqn.2016-06.io.spdk", 00:06:33.704 "max_sessions": 128, 00:06:33.704 "max_connections_per_session": 2, 00:06:33.704 "max_queue_depth": 64, 00:06:33.704 "default_time2wait": 2, 00:06:33.704 "default_time2retain": 20, 00:06:33.704 "first_burst_length": 8192, 00:06:33.704 "immediate_data": true, 00:06:33.704 "allow_duplicated_isid": false, 00:06:33.704 "error_recovery_level": 0, 00:06:33.704 "nop_timeout": 60, 00:06:33.704 "nop_in_interval": 30, 00:06:33.704 "disable_chap": false, 00:06:33.704 "require_chap": false, 00:06:33.704 "mutual_chap": false, 00:06:33.704 "chap_group": 0, 00:06:33.704 "max_large_datain_per_connection": 64, 00:06:33.704 "max_r2t_per_connection": 4, 00:06:33.704 "pdu_pool_size": 36864, 00:06:33.704 "immediate_data_pool_size": 16384, 00:06:33.704 "data_out_pool_size": 2048 00:06:33.704 } 00:06:33.704 } 00:06:33.704 ] 00:06:33.704 } 00:06:33.704 ] 00:06:33.704 } 00:06:33.704 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:06:33.704 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 1501981 00:06:33.704 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1501981 ']' 00:06:33.704 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1501981 00:06:33.704 08:03:47 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:06:33.704 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:33.704 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1501981 00:06:33.704 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:33.704 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:33.704 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1501981' 00:06:33.704 killing process with pid 1501981 00:06:33.704 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1501981 00:06:33.704 08:03:47 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1501981 00:06:33.963 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=1502111 00:06:33.963 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:33.963 08:03:47 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:06:39.238 08:03:52 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 1502111 00:06:39.238 08:03:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 1502111 ']' 00:06:39.238 08:03:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 1502111 00:06:39.238 08:03:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:06:39.238 08:03:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.238 08:03:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1502111 00:06:39.238 08:03:52 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.238 08:03:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.238 08:03:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1502111' 00:06:39.238 killing process with pid 1502111 00:06:39.238 08:03:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 1502111 00:06:39.238 08:03:52 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 1502111 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:06:39.498 00:06:39.498 real 0m6.251s 00:06:39.498 user 0m5.950s 00:06:39.498 sys 0m0.581s 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:06:39.498 ************************************ 00:06:39.498 END TEST skip_rpc_with_json 00:06:39.498 ************************************ 00:06:39.498 08:03:53 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:06:39.498 08:03:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.498 08:03:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.498 08:03:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.498 ************************************ 00:06:39.498 START TEST skip_rpc_with_delay 00:06:39.498 ************************************ 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:06:39.498 [2024-11-20 08:03:53.399738] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:39.498 00:06:39.498 real 0m0.072s 00:06:39.498 user 0m0.042s 00:06:39.498 sys 0m0.029s 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:39.498 08:03:53 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:06:39.498 ************************************ 00:06:39.498 END TEST skip_rpc_with_delay 00:06:39.498 ************************************ 00:06:39.498 08:03:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:06:39.498 08:03:53 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:06:39.498 08:03:53 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:06:39.498 08:03:53 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.498 08:03:53 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.498 08:03:53 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:39.498 ************************************ 00:06:39.498 START TEST exit_on_failed_rpc_init 00:06:39.498 ************************************ 00:06:39.498 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:06:39.498 08:03:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=1503083 00:06:39.498 08:03:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 1503083 00:06:39.498 08:03:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:06:39.498 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 1503083 ']' 00:06:39.498 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:39.498 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.498 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.498 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.498 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:39.758 [2024-11-20 08:03:53.535465] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:06:39.758 [2024-11-20 08:03:53.535508] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503083 ] 00:06:39.758 [2024-11-20 08:03:53.610692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.758 [2024-11-20 08:03:53.652453] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.017 
08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:06:40.017 08:03:53 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:06:40.017 [2024-11-20 08:03:53.925731] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:06:40.017 [2024-11-20 08:03:53.925776] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503194 ] 00:06:40.017 [2024-11-20 08:03:53.996611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.017 [2024-11-20 08:03:54.036925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:40.017 [2024-11-20 08:03:54.036994] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:06:40.017 [2024-11-20 08:03:54.037003] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:06:40.017 [2024-11-20 08:03:54.037009] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 1503083 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 1503083 ']' 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 1503083 00:06:40.276 08:03:54 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1503083 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1503083' 00:06:40.276 killing process with pid 1503083 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 1503083 00:06:40.276 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 1503083 00:06:40.535 00:06:40.535 real 0m0.933s 00:06:40.535 user 0m0.982s 00:06:40.535 sys 0m0.370s 00:06:40.535 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.535 08:03:54 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:06:40.535 ************************************ 00:06:40.535 END TEST exit_on_failed_rpc_init 00:06:40.535 ************************************ 00:06:40.535 08:03:54 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:06:40.535 00:06:40.535 real 0m13.071s 00:06:40.535 user 0m12.327s 00:06:40.535 sys 0m1.509s 00:06:40.535 08:03:54 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.535 08:03:54 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:40.535 ************************************ 00:06:40.535 END TEST skip_rpc 00:06:40.535 ************************************ 00:06:40.535 08:03:54 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:40.535 08:03:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.535 08:03:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.535 08:03:54 -- common/autotest_common.sh@10 -- # set +x 00:06:40.535 ************************************ 00:06:40.535 START TEST rpc_client 00:06:40.535 ************************************ 00:06:40.535 08:03:54 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:06:40.795 * Looking for test storage... 00:06:40.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:06:40.795 08:03:54 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:40.795 08:03:54 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:06:40.795 08:03:54 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:40.795 08:03:54 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@345 -- # : 1 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@353 -- # local d=1 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@355 -- # echo 1 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@353 -- # local d=2 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@355 -- # echo 2 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.795 08:03:54 rpc_client -- scripts/common.sh@368 -- # return 0 00:06:40.795 08:03:54 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.795 08:03:54 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:40.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.795 --rc genhtml_branch_coverage=1 00:06:40.795 --rc genhtml_function_coverage=1 00:06:40.795 --rc genhtml_legend=1 00:06:40.795 --rc geninfo_all_blocks=1 00:06:40.795 --rc geninfo_unexecuted_blocks=1 00:06:40.795 00:06:40.795 ' 00:06:40.795 08:03:54 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:40.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.795 --rc genhtml_branch_coverage=1 
00:06:40.795 --rc genhtml_function_coverage=1 00:06:40.795 --rc genhtml_legend=1 00:06:40.795 --rc geninfo_all_blocks=1 00:06:40.795 --rc geninfo_unexecuted_blocks=1 00:06:40.795 00:06:40.795 ' 00:06:40.795 08:03:54 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:40.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.795 --rc genhtml_branch_coverage=1 00:06:40.795 --rc genhtml_function_coverage=1 00:06:40.795 --rc genhtml_legend=1 00:06:40.795 --rc geninfo_all_blocks=1 00:06:40.795 --rc geninfo_unexecuted_blocks=1 00:06:40.795 00:06:40.795 ' 00:06:40.795 08:03:54 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:40.795 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.795 --rc genhtml_branch_coverage=1 00:06:40.795 --rc genhtml_function_coverage=1 00:06:40.795 --rc genhtml_legend=1 00:06:40.795 --rc geninfo_all_blocks=1 00:06:40.795 --rc geninfo_unexecuted_blocks=1 00:06:40.795 00:06:40.795 ' 00:06:40.795 08:03:54 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:06:40.795 OK 00:06:40.795 08:03:54 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:06:40.795 00:06:40.795 real 0m0.195s 00:06:40.795 user 0m0.122s 00:06:40.795 sys 0m0.088s 00:06:40.795 08:03:54 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.795 08:03:54 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:06:40.795 ************************************ 00:06:40.795 END TEST rpc_client 00:06:40.795 ************************************ 00:06:40.795 08:03:54 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:40.795 08:03:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:40.795 08:03:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.795 08:03:54 -- common/autotest_common.sh@10 
-- # set +x 00:06:40.795 ************************************ 00:06:40.795 START TEST json_config 00:06:40.795 ************************************ 00:06:40.795 08:03:54 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:06:41.057 08:03:54 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:41.057 08:03:54 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:06:41.057 08:03:54 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:41.057 08:03:54 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:41.057 08:03:54 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:41.057 08:03:54 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:41.057 08:03:54 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:41.057 08:03:54 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:06:41.057 08:03:54 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:06:41.057 08:03:54 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:06:41.057 08:03:54 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:06:41.057 08:03:54 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:06:41.057 08:03:54 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:06:41.057 08:03:54 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:06:41.057 08:03:54 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:41.057 08:03:54 json_config -- scripts/common.sh@344 -- # case "$op" in 00:06:41.057 08:03:54 json_config -- scripts/common.sh@345 -- # : 1 00:06:41.057 08:03:54 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:41.057 08:03:54 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:41.057 08:03:54 json_config -- scripts/common.sh@365 -- # decimal 1 00:06:41.057 08:03:54 json_config -- scripts/common.sh@353 -- # local d=1 00:06:41.057 08:03:54 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:41.057 08:03:54 json_config -- scripts/common.sh@355 -- # echo 1 00:06:41.057 08:03:54 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:06:41.057 08:03:54 json_config -- scripts/common.sh@366 -- # decimal 2 00:06:41.057 08:03:54 json_config -- scripts/common.sh@353 -- # local d=2 00:06:41.057 08:03:54 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:41.057 08:03:54 json_config -- scripts/common.sh@355 -- # echo 2 00:06:41.057 08:03:54 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:06:41.057 08:03:54 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:41.057 08:03:54 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:41.057 08:03:54 json_config -- scripts/common.sh@368 -- # return 0 00:06:41.057 08:03:54 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:41.057 08:03:54 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:41.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.057 --rc genhtml_branch_coverage=1 00:06:41.057 --rc genhtml_function_coverage=1 00:06:41.057 --rc genhtml_legend=1 00:06:41.057 --rc geninfo_all_blocks=1 00:06:41.057 --rc geninfo_unexecuted_blocks=1 00:06:41.057 00:06:41.057 ' 00:06:41.057 08:03:54 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:41.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.057 --rc genhtml_branch_coverage=1 00:06:41.057 --rc genhtml_function_coverage=1 00:06:41.057 --rc genhtml_legend=1 00:06:41.057 --rc geninfo_all_blocks=1 00:06:41.057 --rc geninfo_unexecuted_blocks=1 00:06:41.057 00:06:41.057 ' 00:06:41.057 08:03:54 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:41.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.057 --rc genhtml_branch_coverage=1 00:06:41.057 --rc genhtml_function_coverage=1 00:06:41.057 --rc genhtml_legend=1 00:06:41.057 --rc geninfo_all_blocks=1 00:06:41.057 --rc geninfo_unexecuted_blocks=1 00:06:41.057 00:06:41.057 ' 00:06:41.057 08:03:54 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:41.057 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:41.057 --rc genhtml_branch_coverage=1 00:06:41.057 --rc genhtml_function_coverage=1 00:06:41.057 --rc genhtml_legend=1 00:06:41.057 --rc geninfo_all_blocks=1 00:06:41.057 --rc geninfo_unexecuted_blocks=1 00:06:41.057 00:06:41.057 ' 00:06:41.057 08:03:54 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@7 -- # uname -s 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:41.057 08:03:54 json_config -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:41.057 08:03:54 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:06:41.057 08:03:54 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:41.057 08:03:54 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:41.057 08:03:54 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:41.057 08:03:54 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.057 08:03:54 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.057 08:03:54 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.058 08:03:54 json_config -- paths/export.sh@5 -- # export PATH 00:06:41.058 08:03:54 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:41.058 08:03:54 json_config -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:06:41.058 08:03:54 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:41.058 08:03:54 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:41.058 08:03:54 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:41.058 08:03:54 json_config -- nvmf/common.sh@50 -- # : 0 00:06:41.058 08:03:54 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:41.058 08:03:54 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:41.058 08:03:54 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:41.058 08:03:54 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:41.058 08:03:54 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:41.058 08:03:54 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:41.058 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: 
line 31: [: : integer expression expected 00:06:41.058 08:03:54 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:41.058 08:03:54 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:41.058 08:03:54 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:06:41.058 08:03:54 json_config -- 
json_config/json_config.sh@40 -- # last_event_id=0 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:06:41.058 INFO: JSON configuration test init 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:06:41.058 08:03:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:41.058 08:03:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:06:41.058 08:03:54 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:41.058 08:03:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.058 08:03:54 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:06:41.058 08:03:54 json_config -- json_config/common.sh@9 -- # local app=target 00:06:41.058 08:03:54 json_config -- json_config/common.sh@10 -- # shift 00:06:41.058 08:03:54 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:41.058 08:03:54 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:41.058 08:03:54 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:41.058 08:03:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.058 08:03:54 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:41.058 08:03:54 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1503453 00:06:41.058 08:03:54 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:41.058 Waiting for target to run... 
00:06:41.058 08:03:54 json_config -- json_config/common.sh@25 -- # waitforlisten 1503453 /var/tmp/spdk_tgt.sock 00:06:41.058 08:03:54 json_config -- common/autotest_common.sh@835 -- # '[' -z 1503453 ']' 00:06:41.058 08:03:54 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:06:41.058 08:03:54 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:41.058 08:03:54 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.058 08:03:54 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:41.058 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:41.058 08:03:54 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.058 08:03:54 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.058 [2024-11-20 08:03:55.040323] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:06:41.058 [2024-11-20 08:03:55.040374] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1503453 ] 00:06:41.627 [2024-11-20 08:03:55.504450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.627 [2024-11-20 08:03:55.558890] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.886 08:03:55 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.886 08:03:55 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:41.886 08:03:55 json_config -- json_config/common.sh@26 -- # echo '' 00:06:41.886 00:06:41.886 08:03:55 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:06:41.886 08:03:55 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:06:41.886 08:03:55 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:41.886 08:03:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.886 08:03:55 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:06:41.886 08:03:55 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:06:41.886 08:03:55 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:41.886 08:03:55 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:41.886 08:03:55 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:06:41.886 08:03:55 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:06:41.886 08:03:55 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:06:45.174 08:03:59 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:06:45.174 08:03:59 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:06:45.174 08:03:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.174 08:03:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.174 08:03:59 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:06:45.174 08:03:59 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:06:45.174 08:03:59 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:06:45.174 08:03:59 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:06:45.174 08:03:59 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:06:45.174 08:03:59 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:06:45.174 08:03:59 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:06:45.174 08:03:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@51 -- # local get_types 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@54 -- # sort 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:06:45.433 08:03:59 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:06:45.433 08:03:59 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:45.433 08:03:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@62 -- # return 0 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:06:45.433 08:03:59 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:45.433 08:03:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:45.433 08:03:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:06:45.433 MallocForNvmf0 00:06:45.433 08:03:59 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:06:45.433 08:03:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:06:45.691 MallocForNvmf1 00:06:45.691 08:03:59 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:06:45.691 08:03:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:06:45.950 [2024-11-20 08:03:59.782276] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:45.950 08:03:59 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:45.950 08:03:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:06:46.208 08:03:59 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:46.208 08:03:59 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:06:46.208 08:04:00 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:46.208 08:04:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:06:46.467 08:04:00 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:46.467 08:04:00 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:06:46.726 [2024-11-20 08:04:00.524625] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:46.726 08:04:00 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:06:46.726 08:04:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.726 08:04:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.726 08:04:00 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:06:46.726 08:04:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.726 08:04:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.726 08:04:00 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:06:46.726 08:04:00 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:46.726 08:04:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:06:46.985 MallocBdevForConfigChangeCheck 00:06:46.985 08:04:00 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:06:46.985 08:04:00 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.985 08:04:00 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:46.985 08:04:00 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:06:46.985 08:04:00 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:47.244 08:04:01 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:06:47.244 INFO: shutting down applications... 00:06:47.244 08:04:01 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:06:47.244 08:04:01 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:06:47.244 08:04:01 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:06:47.244 08:04:01 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:06:49.779 Calling clear_iscsi_subsystem 00:06:49.779 Calling clear_nvmf_subsystem 00:06:49.779 Calling clear_nbd_subsystem 00:06:49.779 Calling clear_ublk_subsystem 00:06:49.779 Calling clear_vhost_blk_subsystem 00:06:49.779 Calling clear_vhost_scsi_subsystem 00:06:49.779 Calling clear_bdev_subsystem 00:06:49.779 08:04:03 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:06:49.779 08:04:03 json_config -- json_config/json_config.sh@350 -- # count=100 00:06:49.779 08:04:03 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:06:49.779 08:04:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:49.779 08:04:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:06:49.779 08:04:03 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:06:49.779 08:04:03 json_config -- json_config/json_config.sh@352 -- # break 00:06:49.779 08:04:03 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:06:49.779 08:04:03 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:06:49.779 08:04:03 json_config -- json_config/common.sh@31 -- # local app=target 00:06:49.779 08:04:03 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:49.779 08:04:03 json_config -- json_config/common.sh@35 -- # [[ -n 1503453 ]] 00:06:49.779 08:04:03 json_config -- json_config/common.sh@38 -- # kill -SIGINT 1503453 00:06:49.779 08:04:03 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:49.779 08:04:03 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:49.779 08:04:03 json_config -- json_config/common.sh@41 -- # kill -0 1503453 00:06:49.779 08:04:03 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:06:50.347 08:04:04 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:06:50.347 08:04:04 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:50.347 08:04:04 json_config -- json_config/common.sh@41 -- # kill -0 1503453 00:06:50.347 08:04:04 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:50.347 08:04:04 json_config -- json_config/common.sh@43 -- # break 00:06:50.347 08:04:04 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:50.347 08:04:04 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:50.347 SPDK target shutdown done 00:06:50.348 08:04:04 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:06:50.348 INFO: relaunching applications... 
00:06:50.348 08:04:04 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:50.348 08:04:04 json_config -- json_config/common.sh@9 -- # local app=target 00:06:50.348 08:04:04 json_config -- json_config/common.sh@10 -- # shift 00:06:50.348 08:04:04 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:50.348 08:04:04 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:50.348 08:04:04 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:06:50.348 08:04:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:50.348 08:04:04 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:50.348 08:04:04 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=1505191 00:06:50.348 08:04:04 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:50.348 Waiting for target to run... 00:06:50.348 08:04:04 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:50.348 08:04:04 json_config -- json_config/common.sh@25 -- # waitforlisten 1505191 /var/tmp/spdk_tgt.sock 00:06:50.348 08:04:04 json_config -- common/autotest_common.sh@835 -- # '[' -z 1505191 ']' 00:06:50.348 08:04:04 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:50.348 08:04:04 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:50.348 08:04:04 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:50.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:06:50.348 08:04:04 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:50.348 08:04:04 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:50.348 [2024-11-20 08:04:04.290934] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:06:50.348 [2024-11-20 08:04:04.290988] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1505191 ] 00:06:50.607 [2024-11-20 08:04:04.578376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:50.607 [2024-11-20 08:04:04.611913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.896 [2024-11-20 08:04:07.644080] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:53.896 [2024-11-20 08:04:07.676445] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:53.896 08:04:07 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:53.896 08:04:07 json_config -- common/autotest_common.sh@868 -- # return 0 00:06:53.896 08:04:07 json_config -- json_config/common.sh@26 -- # echo '' 00:06:53.896 00:06:53.896 08:04:07 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:06:53.896 08:04:07 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:53.896 INFO: Checking if target configuration is the same... 
00:06:53.896 08:04:07 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:53.897 08:04:07 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:06:53.897 08:04:07 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:53.897 + '[' 2 -ne 2 ']' 00:06:53.897 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:53.897 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:53.897 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:53.897 +++ basename /dev/fd/62 00:06:53.897 ++ mktemp /tmp/62.XXX 00:06:53.897 + tmp_file_1=/tmp/62.lLQ 00:06:53.897 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:53.897 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:53.897 + tmp_file_2=/tmp/spdk_tgt_config.json.a12 00:06:53.897 + ret=0 00:06:53.897 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:54.155 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:54.155 + diff -u /tmp/62.lLQ /tmp/spdk_tgt_config.json.a12 00:06:54.155 + echo 'INFO: JSON config files are the same' 00:06:54.155 INFO: JSON config files are the same 00:06:54.155 + rm /tmp/62.lLQ /tmp/spdk_tgt_config.json.a12 00:06:54.155 + exit 0 00:06:54.155 08:04:08 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:06:54.155 08:04:08 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:54.155 INFO: changing configuration and checking if this can be detected... 
00:06:54.155 08:04:08 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:54.155 08:04:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:54.414 08:04:08 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:06:54.414 08:04:08 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:54.414 08:04:08 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:54.414 + '[' 2 -ne 2 ']' 00:06:54.414 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:54.414 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:06:54.414 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:54.414 +++ basename /dev/fd/62 00:06:54.414 ++ mktemp /tmp/62.XXX 00:06:54.414 + tmp_file_1=/tmp/62.Jkp 00:06:54.414 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:54.414 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:54.414 + tmp_file_2=/tmp/spdk_tgt_config.json.grk 00:06:54.414 + ret=0 00:06:54.414 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:54.673 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:54.673 + diff -u /tmp/62.Jkp /tmp/spdk_tgt_config.json.grk 00:06:54.673 + ret=1 00:06:54.673 + echo '=== Start of file: /tmp/62.Jkp ===' 00:06:54.673 + cat /tmp/62.Jkp 00:06:54.673 + echo '=== End of file: /tmp/62.Jkp ===' 00:06:54.673 + echo '' 00:06:54.673 + echo '=== Start of file: /tmp/spdk_tgt_config.json.grk ===' 00:06:54.673 + cat /tmp/spdk_tgt_config.json.grk 00:06:54.673 + echo '=== End of file: /tmp/spdk_tgt_config.json.grk ===' 00:06:54.673 + echo '' 00:06:54.673 + rm /tmp/62.Jkp /tmp/spdk_tgt_config.json.grk 00:06:54.673 + exit 1 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:06:54.932 INFO: configuration change detected. 
00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@324 -- # [[ -n 1505191 ]] 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@200 -- # uname -s 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:54.932 08:04:08 json_config -- json_config/json_config.sh@330 -- # killprocess 1505191 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@954 -- # '[' -z 1505191 ']' 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@958 -- # kill -0 
1505191 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@959 -- # uname 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1505191 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1505191' 00:06:54.932 killing process with pid 1505191 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@973 -- # kill 1505191 00:06:54.932 08:04:08 json_config -- common/autotest_common.sh@978 -- # wait 1505191 00:06:56.836 08:04:10 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:56.836 08:04:10 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:06:56.836 08:04:10 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:56.836 08:04:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.836 08:04:10 json_config -- json_config/json_config.sh@335 -- # return 0 00:06:56.836 08:04:10 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:06:56.836 INFO: Success 00:06:56.836 00:06:56.836 real 0m16.051s 00:06:56.836 user 0m16.389s 00:06:56.836 sys 0m2.532s 00:06:56.836 08:04:10 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:56.836 08:04:10 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:56.836 ************************************ 00:06:56.836 END TEST json_config 00:06:56.836 ************************************ 00:06:57.096 08:04:10 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:57.096 08:04:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:57.096 08:04:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:57.096 08:04:10 -- common/autotest_common.sh@10 -- # set +x 00:06:57.096 ************************************ 00:06:57.096 START TEST json_config_extra_key 00:06:57.096 ************************************ 00:06:57.096 08:04:10 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:57.096 08:04:10 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:57.096 08:04:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:06:57.096 08:04:10 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:57.096 08:04:11 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:57.096 08:04:11 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:57.096 08:04:11 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:57.096 08:04:11 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:57.096 08:04:11 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:06:57.096 08:04:11 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:06:57.096 08:04:11 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:06:57.096 08:04:11 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:06:57.096 08:04:11 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:06:57.096 08:04:11 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:06:57.096 08:04:11 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:06:57.097 08:04:11 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:57.097 08:04:11 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:57.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.097 --rc genhtml_branch_coverage=1 00:06:57.097 --rc genhtml_function_coverage=1 00:06:57.097 --rc genhtml_legend=1 00:06:57.097 --rc geninfo_all_blocks=1 
00:06:57.097 --rc geninfo_unexecuted_blocks=1 00:06:57.097 00:06:57.097 ' 00:06:57.097 08:04:11 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:57.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.097 --rc genhtml_branch_coverage=1 00:06:57.097 --rc genhtml_function_coverage=1 00:06:57.097 --rc genhtml_legend=1 00:06:57.097 --rc geninfo_all_blocks=1 00:06:57.097 --rc geninfo_unexecuted_blocks=1 00:06:57.097 00:06:57.097 ' 00:06:57.097 08:04:11 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:57.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.097 --rc genhtml_branch_coverage=1 00:06:57.097 --rc genhtml_function_coverage=1 00:06:57.097 --rc genhtml_legend=1 00:06:57.097 --rc geninfo_all_blocks=1 00:06:57.097 --rc geninfo_unexecuted_blocks=1 00:06:57.097 00:06:57.097 ' 00:06:57.097 08:04:11 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:57.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:57.097 --rc genhtml_branch_coverage=1 00:06:57.097 --rc genhtml_function_coverage=1 00:06:57.097 --rc genhtml_legend=1 00:06:57.097 --rc geninfo_all_blocks=1 00:06:57.097 --rc geninfo_unexecuted_blocks=1 00:06:57.097 00:06:57.097 ' 00:06:57.097 08:04:11 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:57.097 08:04:11 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:57.097 08:04:11 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.097 08:04:11 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.097 08:04:11 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.097 08:04:11 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:57.097 08:04:11 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@48 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:57.097 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:57.097 08:04:11 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:57.097 08:04:11 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:57.097 08:04:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:57.097 08:04:11 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:57.097 08:04:11 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:57.097 08:04:11 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:57.097 08:04:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:57.097 08:04:11 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:57.097 08:04:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:57.097 08:04:11 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:57.097 08:04:11 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:57.097 08:04:11 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:57.097 INFO: launching applications... 00:06:57.097 08:04:11 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:57.097 08:04:11 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:57.097 08:04:11 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:57.097 08:04:11 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:57.097 08:04:11 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:57.097 08:04:11 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:57.097 08:04:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:57.097 08:04:11 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:57.097 08:04:11 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=1506465 00:06:57.097 08:04:11 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for 
target to run...' 00:06:57.097 Waiting for target to run... 00:06:57.098 08:04:11 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 1506465 /var/tmp/spdk_tgt.sock 00:06:57.098 08:04:11 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 1506465 ']' 00:06:57.098 08:04:11 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:57.098 08:04:11 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:57.098 08:04:11 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.098 08:04:11 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:57.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:57.098 08:04:11 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.098 08:04:11 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:57.357 [2024-11-20 08:04:11.149228] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:06:57.357 [2024-11-20 08:04:11.149278] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506465 ] 00:06:57.616 [2024-11-20 08:04:11.594030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.875 [2024-11-20 08:04:11.644926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.134 08:04:11 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:58.134 08:04:11 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:06:58.134 08:04:12 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:58.134 00:06:58.134 08:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:58.134 INFO: shutting down applications... 00:06:58.134 08:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:58.134 08:04:12 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:58.134 08:04:12 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:58.134 08:04:12 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 1506465 ]] 00:06:58.134 08:04:12 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 1506465 00:06:58.134 08:04:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:58.134 08:04:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:58.134 08:04:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1506465 00:06:58.134 08:04:12 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:58.703 08:04:12 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:58.703 08:04:12 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:06:58.703 08:04:12 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 1506465 00:06:58.703 08:04:12 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:58.703 08:04:12 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:58.703 08:04:12 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:58.703 08:04:12 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:58.703 SPDK target shutdown done 00:06:58.703 08:04:12 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:58.703 Success 00:06:58.703 00:06:58.703 real 0m1.601s 00:06:58.703 user 0m1.251s 00:06:58.703 sys 0m0.559s 00:06:58.703 08:04:12 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:58.703 08:04:12 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:58.703 ************************************ 00:06:58.703 END TEST json_config_extra_key 00:06:58.703 ************************************ 00:06:58.703 08:04:12 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:58.703 08:04:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:58.703 08:04:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:58.703 08:04:12 -- common/autotest_common.sh@10 -- # set +x 00:06:58.703 ************************************ 00:06:58.703 START TEST alias_rpc 00:06:58.703 ************************************ 00:06:58.703 08:04:12 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:58.703 * Looking for test storage... 
00:06:58.703 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:58.703 08:04:12 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:58.703 08:04:12 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:06:58.703 08:04:12 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:58.962 08:04:12 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:58.962 08:04:12 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:58.962 08:04:12 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:58.962 08:04:12 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@345 -- # : 1 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:58.963 08:04:12 alias_rpc -- scripts/common.sh@368 -- # return 0 00:06:58.963 08:04:12 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:58.963 08:04:12 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:58.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.963 --rc genhtml_branch_coverage=1 00:06:58.963 --rc genhtml_function_coverage=1 00:06:58.963 --rc genhtml_legend=1 00:06:58.963 --rc geninfo_all_blocks=1 00:06:58.963 --rc geninfo_unexecuted_blocks=1 00:06:58.963 00:06:58.963 ' 00:06:58.963 08:04:12 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:58.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.963 --rc genhtml_branch_coverage=1 00:06:58.963 --rc genhtml_function_coverage=1 00:06:58.963 --rc genhtml_legend=1 00:06:58.963 --rc geninfo_all_blocks=1 00:06:58.963 --rc geninfo_unexecuted_blocks=1 00:06:58.963 00:06:58.963 ' 00:06:58.963 08:04:12 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:06:58.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.963 --rc genhtml_branch_coverage=1 00:06:58.963 --rc genhtml_function_coverage=1 00:06:58.963 --rc genhtml_legend=1 00:06:58.963 --rc geninfo_all_blocks=1 00:06:58.963 --rc geninfo_unexecuted_blocks=1 00:06:58.963 00:06:58.963 ' 00:06:58.963 08:04:12 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:58.963 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:58.963 --rc genhtml_branch_coverage=1 00:06:58.963 --rc genhtml_function_coverage=1 00:06:58.963 --rc genhtml_legend=1 00:06:58.963 --rc geninfo_all_blocks=1 00:06:58.963 --rc geninfo_unexecuted_blocks=1 00:06:58.963 00:06:58.963 ' 00:06:58.963 08:04:12 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:58.963 08:04:12 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=1506754 00:06:58.963 08:04:12 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 1506754 00:06:58.963 08:04:12 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:58.963 08:04:12 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 1506754 ']' 00:06:58.963 08:04:12 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:58.963 08:04:12 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:58.963 08:04:12 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:58.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:58.963 08:04:12 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:58.963 08:04:12 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:58.963 [2024-11-20 08:04:12.813652] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
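The `lt 1.15 2` check traced in the alias_rpc run splits each version string on `.`, `-`, and `:` into an array and compares components numerically, padding the shorter version with zeros. A self-contained sketch of that comparison (reimplemented here for illustration, not copied from scripts/common.sh):

```shell
#!/usr/bin/env bash
# Sketch of component-wise version comparison: returns 0 when $1 < $2.
version_lt() {
    local IFS=.-:        # split on the same separators the trace shows
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < n; i++ )); do
        a=${v1[i]:-0}; b=${v2[i]:-0}   # missing components compare as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal is not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Because the comparison is numeric per component, `1.15 < 2` holds even though the string `"1.15"` sorts after `"2"` lexically, which is exactly why the harness avoids a plain string comparison.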
00:06:58.963 [2024-11-20 08:04:12.813698] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1506754 ] 00:06:58.963 [2024-11-20 08:04:12.887353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.963 [2024-11-20 08:04:12.926574] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:59.224 08:04:13 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:59.224 08:04:13 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:59.224 08:04:13 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:59.484 08:04:13 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 1506754 00:06:59.484 08:04:13 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 1506754 ']' 00:06:59.484 08:04:13 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 1506754 00:06:59.484 08:04:13 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:06:59.484 08:04:13 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:59.484 08:04:13 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1506754 00:06:59.484 08:04:13 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:59.484 08:04:13 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:59.484 08:04:13 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1506754' 00:06:59.484 killing process with pid 1506754 00:06:59.484 08:04:13 alias_rpc -- common/autotest_common.sh@973 -- # kill 1506754 00:06:59.484 08:04:13 alias_rpc -- common/autotest_common.sh@978 -- # wait 1506754 00:06:59.743 00:06:59.743 real 0m1.136s 00:06:59.743 user 0m1.151s 00:06:59.743 sys 0m0.409s 00:06:59.743 08:04:13 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:59.743 08:04:13 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:59.743 ************************************ 00:06:59.743 END TEST alias_rpc 00:06:59.743 ************************************ 00:06:59.743 08:04:13 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:06:59.743 08:04:13 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:59.743 08:04:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:59.743 08:04:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:59.743 08:04:13 -- common/autotest_common.sh@10 -- # set +x 00:07:00.002 ************************************ 00:07:00.002 START TEST spdkcli_tcp 00:07:00.002 ************************************ 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:07:00.002 * Looking for test storage... 
00:07:00.002 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:00.002 08:04:13 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.002 --rc genhtml_branch_coverage=1 00:07:00.002 --rc genhtml_function_coverage=1 00:07:00.002 --rc genhtml_legend=1 00:07:00.002 --rc geninfo_all_blocks=1 00:07:00.002 --rc geninfo_unexecuted_blocks=1 00:07:00.002 00:07:00.002 ' 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.002 --rc genhtml_branch_coverage=1 00:07:00.002 --rc genhtml_function_coverage=1 00:07:00.002 --rc genhtml_legend=1 00:07:00.002 --rc geninfo_all_blocks=1 00:07:00.002 --rc geninfo_unexecuted_blocks=1 00:07:00.002 00:07:00.002 ' 00:07:00.002 08:04:13 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.002 --rc genhtml_branch_coverage=1 00:07:00.002 --rc genhtml_function_coverage=1 00:07:00.002 --rc genhtml_legend=1 00:07:00.002 --rc geninfo_all_blocks=1 00:07:00.002 --rc geninfo_unexecuted_blocks=1 00:07:00.002 00:07:00.002 ' 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:00.002 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:00.002 --rc genhtml_branch_coverage=1 00:07:00.002 --rc genhtml_function_coverage=1 00:07:00.002 --rc genhtml_legend=1 00:07:00.002 --rc geninfo_all_blocks=1 00:07:00.002 --rc geninfo_unexecuted_blocks=1 00:07:00.002 00:07:00.002 ' 00:07:00.002 08:04:13 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:07:00.002 08:04:13 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:07:00.002 08:04:13 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:07:00.002 08:04:13 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:07:00.002 08:04:13 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:07:00.002 08:04:13 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:07:00.002 08:04:13 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.002 08:04:13 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=1507041 00:07:00.002 08:04:13 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:07:00.002 08:04:13 spdkcli_tcp -- 
spdkcli/tcp.sh@27 -- # waitforlisten 1507041 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 1507041 ']' 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:00.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:00.002 08:04:13 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.002 [2024-11-20 08:04:14.011145] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:00.002 [2024-11-20 08:04:14.011194] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507041 ] 00:07:00.262 [2024-11-20 08:04:14.087267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:00.262 [2024-11-20 08:04:14.128002] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.262 [2024-11-20 08:04:14.128003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:00.521 08:04:14 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:00.521 08:04:14 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:07:00.521 08:04:14 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=1507059 00:07:00.521 08:04:14 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:07:00.521 08:04:14 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat 
TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:07:00.521 [ 00:07:00.521 "bdev_malloc_delete", 00:07:00.521 "bdev_malloc_create", 00:07:00.521 "bdev_null_resize", 00:07:00.521 "bdev_null_delete", 00:07:00.521 "bdev_null_create", 00:07:00.521 "bdev_nvme_cuse_unregister", 00:07:00.521 "bdev_nvme_cuse_register", 00:07:00.521 "bdev_opal_new_user", 00:07:00.521 "bdev_opal_set_lock_state", 00:07:00.521 "bdev_opal_delete", 00:07:00.521 "bdev_opal_get_info", 00:07:00.521 "bdev_opal_create", 00:07:00.521 "bdev_nvme_opal_revert", 00:07:00.521 "bdev_nvme_opal_init", 00:07:00.521 "bdev_nvme_send_cmd", 00:07:00.521 "bdev_nvme_set_keys", 00:07:00.521 "bdev_nvme_get_path_iostat", 00:07:00.521 "bdev_nvme_get_mdns_discovery_info", 00:07:00.521 "bdev_nvme_stop_mdns_discovery", 00:07:00.521 "bdev_nvme_start_mdns_discovery", 00:07:00.521 "bdev_nvme_set_multipath_policy", 00:07:00.521 "bdev_nvme_set_preferred_path", 00:07:00.521 "bdev_nvme_get_io_paths", 00:07:00.521 "bdev_nvme_remove_error_injection", 00:07:00.521 "bdev_nvme_add_error_injection", 00:07:00.521 "bdev_nvme_get_discovery_info", 00:07:00.521 "bdev_nvme_stop_discovery", 00:07:00.521 "bdev_nvme_start_discovery", 00:07:00.521 "bdev_nvme_get_controller_health_info", 00:07:00.521 "bdev_nvme_disable_controller", 00:07:00.521 "bdev_nvme_enable_controller", 00:07:00.521 "bdev_nvme_reset_controller", 00:07:00.521 "bdev_nvme_get_transport_statistics", 00:07:00.521 "bdev_nvme_apply_firmware", 00:07:00.521 "bdev_nvme_detach_controller", 00:07:00.521 "bdev_nvme_get_controllers", 00:07:00.521 "bdev_nvme_attach_controller", 00:07:00.521 "bdev_nvme_set_hotplug", 00:07:00.521 "bdev_nvme_set_options", 00:07:00.521 "bdev_passthru_delete", 00:07:00.521 "bdev_passthru_create", 00:07:00.521 "bdev_lvol_set_parent_bdev", 00:07:00.521 "bdev_lvol_set_parent", 00:07:00.521 "bdev_lvol_check_shallow_copy", 00:07:00.521 "bdev_lvol_start_shallow_copy", 00:07:00.521 "bdev_lvol_grow_lvstore", 00:07:00.521 "bdev_lvol_get_lvols", 00:07:00.521 
"bdev_lvol_get_lvstores", 00:07:00.521 "bdev_lvol_delete", 00:07:00.521 "bdev_lvol_set_read_only", 00:07:00.521 "bdev_lvol_resize", 00:07:00.521 "bdev_lvol_decouple_parent", 00:07:00.521 "bdev_lvol_inflate", 00:07:00.521 "bdev_lvol_rename", 00:07:00.521 "bdev_lvol_clone_bdev", 00:07:00.521 "bdev_lvol_clone", 00:07:00.521 "bdev_lvol_snapshot", 00:07:00.521 "bdev_lvol_create", 00:07:00.521 "bdev_lvol_delete_lvstore", 00:07:00.521 "bdev_lvol_rename_lvstore", 00:07:00.521 "bdev_lvol_create_lvstore", 00:07:00.521 "bdev_raid_set_options", 00:07:00.521 "bdev_raid_remove_base_bdev", 00:07:00.521 "bdev_raid_add_base_bdev", 00:07:00.521 "bdev_raid_delete", 00:07:00.521 "bdev_raid_create", 00:07:00.521 "bdev_raid_get_bdevs", 00:07:00.521 "bdev_error_inject_error", 00:07:00.521 "bdev_error_delete", 00:07:00.521 "bdev_error_create", 00:07:00.521 "bdev_split_delete", 00:07:00.522 "bdev_split_create", 00:07:00.522 "bdev_delay_delete", 00:07:00.522 "bdev_delay_create", 00:07:00.522 "bdev_delay_update_latency", 00:07:00.522 "bdev_zone_block_delete", 00:07:00.522 "bdev_zone_block_create", 00:07:00.522 "blobfs_create", 00:07:00.522 "blobfs_detect", 00:07:00.522 "blobfs_set_cache_size", 00:07:00.522 "bdev_aio_delete", 00:07:00.522 "bdev_aio_rescan", 00:07:00.522 "bdev_aio_create", 00:07:00.522 "bdev_ftl_set_property", 00:07:00.522 "bdev_ftl_get_properties", 00:07:00.522 "bdev_ftl_get_stats", 00:07:00.522 "bdev_ftl_unmap", 00:07:00.522 "bdev_ftl_unload", 00:07:00.522 "bdev_ftl_delete", 00:07:00.522 "bdev_ftl_load", 00:07:00.522 "bdev_ftl_create", 00:07:00.522 "bdev_virtio_attach_controller", 00:07:00.522 "bdev_virtio_scsi_get_devices", 00:07:00.522 "bdev_virtio_detach_controller", 00:07:00.522 "bdev_virtio_blk_set_hotplug", 00:07:00.522 "bdev_iscsi_delete", 00:07:00.522 "bdev_iscsi_create", 00:07:00.522 "bdev_iscsi_set_options", 00:07:00.522 "accel_error_inject_error", 00:07:00.522 "ioat_scan_accel_module", 00:07:00.522 "dsa_scan_accel_module", 00:07:00.522 "iaa_scan_accel_module", 
00:07:00.522 "vfu_virtio_create_fs_endpoint", 00:07:00.522 "vfu_virtio_create_scsi_endpoint", 00:07:00.522 "vfu_virtio_scsi_remove_target", 00:07:00.522 "vfu_virtio_scsi_add_target", 00:07:00.522 "vfu_virtio_create_blk_endpoint", 00:07:00.522 "vfu_virtio_delete_endpoint", 00:07:00.522 "keyring_file_remove_key", 00:07:00.522 "keyring_file_add_key", 00:07:00.522 "keyring_linux_set_options", 00:07:00.522 "fsdev_aio_delete", 00:07:00.522 "fsdev_aio_create", 00:07:00.522 "iscsi_get_histogram", 00:07:00.522 "iscsi_enable_histogram", 00:07:00.522 "iscsi_set_options", 00:07:00.522 "iscsi_get_auth_groups", 00:07:00.522 "iscsi_auth_group_remove_secret", 00:07:00.522 "iscsi_auth_group_add_secret", 00:07:00.522 "iscsi_delete_auth_group", 00:07:00.522 "iscsi_create_auth_group", 00:07:00.522 "iscsi_set_discovery_auth", 00:07:00.522 "iscsi_get_options", 00:07:00.522 "iscsi_target_node_request_logout", 00:07:00.522 "iscsi_target_node_set_redirect", 00:07:00.522 "iscsi_target_node_set_auth", 00:07:00.522 "iscsi_target_node_add_lun", 00:07:00.522 "iscsi_get_stats", 00:07:00.522 "iscsi_get_connections", 00:07:00.522 "iscsi_portal_group_set_auth", 00:07:00.522 "iscsi_start_portal_group", 00:07:00.522 "iscsi_delete_portal_group", 00:07:00.522 "iscsi_create_portal_group", 00:07:00.522 "iscsi_get_portal_groups", 00:07:00.522 "iscsi_delete_target_node", 00:07:00.522 "iscsi_target_node_remove_pg_ig_maps", 00:07:00.522 "iscsi_target_node_add_pg_ig_maps", 00:07:00.522 "iscsi_create_target_node", 00:07:00.522 "iscsi_get_target_nodes", 00:07:00.522 "iscsi_delete_initiator_group", 00:07:00.522 "iscsi_initiator_group_remove_initiators", 00:07:00.522 "iscsi_initiator_group_add_initiators", 00:07:00.522 "iscsi_create_initiator_group", 00:07:00.522 "iscsi_get_initiator_groups", 00:07:00.522 "nvmf_set_crdt", 00:07:00.522 "nvmf_set_config", 00:07:00.522 "nvmf_set_max_subsystems", 00:07:00.522 "nvmf_stop_mdns_prr", 00:07:00.522 "nvmf_publish_mdns_prr", 00:07:00.522 "nvmf_subsystem_get_listeners", 
00:07:00.522 "nvmf_subsystem_get_qpairs", 00:07:00.522 "nvmf_subsystem_get_controllers", 00:07:00.522 "nvmf_get_stats", 00:07:00.522 "nvmf_get_transports", 00:07:00.522 "nvmf_create_transport", 00:07:00.522 "nvmf_get_targets", 00:07:00.522 "nvmf_delete_target", 00:07:00.522 "nvmf_create_target", 00:07:00.522 "nvmf_subsystem_allow_any_host", 00:07:00.522 "nvmf_subsystem_set_keys", 00:07:00.522 "nvmf_subsystem_remove_host", 00:07:00.522 "nvmf_subsystem_add_host", 00:07:00.522 "nvmf_ns_remove_host", 00:07:00.522 "nvmf_ns_add_host", 00:07:00.522 "nvmf_subsystem_remove_ns", 00:07:00.522 "nvmf_subsystem_set_ns_ana_group", 00:07:00.522 "nvmf_subsystem_add_ns", 00:07:00.522 "nvmf_subsystem_listener_set_ana_state", 00:07:00.522 "nvmf_discovery_get_referrals", 00:07:00.522 "nvmf_discovery_remove_referral", 00:07:00.522 "nvmf_discovery_add_referral", 00:07:00.522 "nvmf_subsystem_remove_listener", 00:07:00.522 "nvmf_subsystem_add_listener", 00:07:00.522 "nvmf_delete_subsystem", 00:07:00.522 "nvmf_create_subsystem", 00:07:00.522 "nvmf_get_subsystems", 00:07:00.522 "env_dpdk_get_mem_stats", 00:07:00.522 "nbd_get_disks", 00:07:00.522 "nbd_stop_disk", 00:07:00.522 "nbd_start_disk", 00:07:00.522 "ublk_recover_disk", 00:07:00.522 "ublk_get_disks", 00:07:00.522 "ublk_stop_disk", 00:07:00.522 "ublk_start_disk", 00:07:00.522 "ublk_destroy_target", 00:07:00.522 "ublk_create_target", 00:07:00.522 "virtio_blk_create_transport", 00:07:00.522 "virtio_blk_get_transports", 00:07:00.522 "vhost_controller_set_coalescing", 00:07:00.522 "vhost_get_controllers", 00:07:00.522 "vhost_delete_controller", 00:07:00.522 "vhost_create_blk_controller", 00:07:00.522 "vhost_scsi_controller_remove_target", 00:07:00.522 "vhost_scsi_controller_add_target", 00:07:00.522 "vhost_start_scsi_controller", 00:07:00.522 "vhost_create_scsi_controller", 00:07:00.522 "thread_set_cpumask", 00:07:00.522 "scheduler_set_options", 00:07:00.522 "framework_get_governor", 00:07:00.522 "framework_get_scheduler", 00:07:00.522 
"framework_set_scheduler", 00:07:00.522 "framework_get_reactors", 00:07:00.522 "thread_get_io_channels", 00:07:00.522 "thread_get_pollers", 00:07:00.522 "thread_get_stats", 00:07:00.522 "framework_monitor_context_switch", 00:07:00.522 "spdk_kill_instance", 00:07:00.522 "log_enable_timestamps", 00:07:00.522 "log_get_flags", 00:07:00.522 "log_clear_flag", 00:07:00.522 "log_set_flag", 00:07:00.522 "log_get_level", 00:07:00.522 "log_set_level", 00:07:00.522 "log_get_print_level", 00:07:00.522 "log_set_print_level", 00:07:00.522 "framework_enable_cpumask_locks", 00:07:00.522 "framework_disable_cpumask_locks", 00:07:00.522 "framework_wait_init", 00:07:00.522 "framework_start_init", 00:07:00.522 "scsi_get_devices", 00:07:00.522 "bdev_get_histogram", 00:07:00.522 "bdev_enable_histogram", 00:07:00.522 "bdev_set_qos_limit", 00:07:00.522 "bdev_set_qd_sampling_period", 00:07:00.522 "bdev_get_bdevs", 00:07:00.522 "bdev_reset_iostat", 00:07:00.522 "bdev_get_iostat", 00:07:00.522 "bdev_examine", 00:07:00.522 "bdev_wait_for_examine", 00:07:00.522 "bdev_set_options", 00:07:00.522 "accel_get_stats", 00:07:00.522 "accel_set_options", 00:07:00.522 "accel_set_driver", 00:07:00.522 "accel_crypto_key_destroy", 00:07:00.522 "accel_crypto_keys_get", 00:07:00.522 "accel_crypto_key_create", 00:07:00.522 "accel_assign_opc", 00:07:00.522 "accel_get_module_info", 00:07:00.522 "accel_get_opc_assignments", 00:07:00.522 "vmd_rescan", 00:07:00.522 "vmd_remove_device", 00:07:00.522 "vmd_enable", 00:07:00.522 "sock_get_default_impl", 00:07:00.522 "sock_set_default_impl", 00:07:00.522 "sock_impl_set_options", 00:07:00.522 "sock_impl_get_options", 00:07:00.522 "iobuf_get_stats", 00:07:00.522 "iobuf_set_options", 00:07:00.522 "keyring_get_keys", 00:07:00.522 "vfu_tgt_set_base_path", 00:07:00.522 "framework_get_pci_devices", 00:07:00.522 "framework_get_config", 00:07:00.522 "framework_get_subsystems", 00:07:00.522 "fsdev_set_opts", 00:07:00.522 "fsdev_get_opts", 00:07:00.522 "trace_get_info", 
00:07:00.522 "trace_get_tpoint_group_mask", 00:07:00.522 "trace_disable_tpoint_group", 00:07:00.522 "trace_enable_tpoint_group", 00:07:00.522 "trace_clear_tpoint_mask", 00:07:00.522 "trace_set_tpoint_mask", 00:07:00.522 "notify_get_notifications", 00:07:00.522 "notify_get_types", 00:07:00.522 "spdk_get_version", 00:07:00.522 "rpc_get_methods" 00:07:00.522 ] 00:07:00.781 08:04:14 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:07:00.781 08:04:14 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:00.781 08:04:14 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:00.781 08:04:14 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:07:00.781 08:04:14 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 1507041 00:07:00.781 08:04:14 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 1507041 ']' 00:07:00.781 08:04:14 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 1507041 00:07:00.781 08:04:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:07:00.781 08:04:14 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:00.781 08:04:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1507041 00:07:00.781 08:04:14 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:00.781 08:04:14 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:00.781 08:04:14 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1507041' 00:07:00.781 killing process with pid 1507041 00:07:00.781 08:04:14 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 1507041 00:07:00.781 08:04:14 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 1507041 00:07:01.041 00:07:01.041 real 0m1.168s 00:07:01.041 user 0m1.988s 00:07:01.041 sys 0m0.437s 00:07:01.041 08:04:14 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:01.041 08:04:14 spdkcli_tcp -- 
common/autotest_common.sh@10 -- # set +x 00:07:01.041 ************************************ 00:07:01.041 END TEST spdkcli_tcp 00:07:01.041 ************************************ 00:07:01.041 08:04:14 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:01.041 08:04:14 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:01.041 08:04:14 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:01.041 08:04:14 -- common/autotest_common.sh@10 -- # set +x 00:07:01.041 ************************************ 00:07:01.041 START TEST dpdk_mem_utility 00:07:01.041 ************************************ 00:07:01.041 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:07:01.300 * Looking for test storage... 00:07:01.300 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:07:01.300 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:01.300 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:07:01.300 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:01.300 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 
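The spdkcli_tcp run above reaches the SPDK RPC server, which listens only on a UNIX-domain socket, through a socat bridge so a TCP client (`rpc.py -s 127.0.0.1 -p 9998`) can talk to it. A hedged sketch of that bridge, assuming socat is installed and an spdk_tgt is already listening on /var/tmp/spdk.sock; the `fork,reuseaddr` options are an addition for multi-connection use, not present in the trace:

```shell
# Bridge: accept TCP connections on port 9998 (PORT=9998 in the trace)
# and relay each one to the target's UNIX-domain RPC socket.
socat TCP-LISTEN:9998,reuseaddr,fork UNIX-CONNECT:/var/tmp/spdk.sock &
socat_pid=$!

# Any TCP-capable RPC client can now reach the UNIX-only server, e.g.:
#   rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods
# (-r: retry count, -t: timeout in seconds, matching the traced invocation)

kill "$socat_pid"   # tear the bridge down when done, as tcp.sh does via killprocess
```

This is a command fragment rather than a runnable test: it depends on an external spdk_tgt process, which is why the harness records a separate `socat_pid` and kills it alongside `spdk_tgt_pid` during cleanup.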
00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:01.300 08:04:15 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:07:01.300 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:01.300 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 
00:07:01.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.300 --rc genhtml_branch_coverage=1 00:07:01.300 --rc genhtml_function_coverage=1 00:07:01.300 --rc genhtml_legend=1 00:07:01.300 --rc geninfo_all_blocks=1 00:07:01.300 --rc geninfo_unexecuted_blocks=1 00:07:01.300 00:07:01.300 ' 00:07:01.300 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:01.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.300 --rc genhtml_branch_coverage=1 00:07:01.300 --rc genhtml_function_coverage=1 00:07:01.300 --rc genhtml_legend=1 00:07:01.300 --rc geninfo_all_blocks=1 00:07:01.300 --rc geninfo_unexecuted_blocks=1 00:07:01.300 00:07:01.300 ' 00:07:01.300 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:01.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.300 --rc genhtml_branch_coverage=1 00:07:01.300 --rc genhtml_function_coverage=1 00:07:01.300 --rc genhtml_legend=1 00:07:01.300 --rc geninfo_all_blocks=1 00:07:01.300 --rc geninfo_unexecuted_blocks=1 00:07:01.300 00:07:01.300 ' 00:07:01.300 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:01.300 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:01.300 --rc genhtml_branch_coverage=1 00:07:01.300 --rc genhtml_function_coverage=1 00:07:01.300 --rc genhtml_legend=1 00:07:01.300 --rc geninfo_all_blocks=1 00:07:01.300 --rc geninfo_unexecuted_blocks=1 00:07:01.300 00:07:01.300 ' 00:07:01.300 08:04:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:01.300 08:04:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=1507351 00:07:01.300 08:04:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:07:01.301 08:04:15 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 1507351 00:07:01.301 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 1507351 ']' 00:07:01.301 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.301 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.301 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.301 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.301 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:01.301 [2024-11-20 08:04:15.251837] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:01.301 [2024-11-20 08:04:15.251883] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507351 ] 00:07:01.559 [2024-11-20 08:04:15.327023] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.559 [2024-11-20 08:04:15.368921] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:01.559 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.559 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:07:01.559 08:04:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:07:01.559 08:04:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:07:01.559 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.559 
08:04:15 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:01.819 { 00:07:01.819 "filename": "/tmp/spdk_mem_dump.txt" 00:07:01.819 } 00:07:01.819 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.819 08:04:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:07:01.819 DPDK memory size 810.000000 MiB in 1 heap(s) 00:07:01.819 1 heaps totaling size 810.000000 MiB 00:07:01.819 size: 810.000000 MiB heap id: 0 00:07:01.819 end heaps---------- 00:07:01.819 9 mempools totaling size 595.772034 MiB 00:07:01.819 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:07:01.819 size: 158.602051 MiB name: PDU_data_out_Pool 00:07:01.819 size: 92.545471 MiB name: bdev_io_1507351 00:07:01.819 size: 50.003479 MiB name: msgpool_1507351 00:07:01.819 size: 36.509338 MiB name: fsdev_io_1507351 00:07:01.819 size: 21.763794 MiB name: PDU_Pool 00:07:01.819 size: 19.513306 MiB name: SCSI_TASK_Pool 00:07:01.819 size: 4.133484 MiB name: evtpool_1507351 00:07:01.819 size: 0.026123 MiB name: Session_Pool 00:07:01.819 end mempools------- 00:07:01.819 6 memzones totaling size 4.142822 MiB 00:07:01.819 size: 1.000366 MiB name: RG_ring_0_1507351 00:07:01.819 size: 1.000366 MiB name: RG_ring_1_1507351 00:07:01.819 size: 1.000366 MiB name: RG_ring_4_1507351 00:07:01.819 size: 1.000366 MiB name: RG_ring_5_1507351 00:07:01.819 size: 0.125366 MiB name: RG_ring_2_1507351 00:07:01.819 size: 0.015991 MiB name: RG_ring_3_1507351 00:07:01.819 end memzones------- 00:07:01.819 08:04:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:07:01.819 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:07:01.819 list of free elements. 
size: 10.862488 MiB 00:07:01.819 element at address: 0x200018a00000 with size: 0.999878 MiB 00:07:01.819 element at address: 0x200018c00000 with size: 0.999878 MiB 00:07:01.819 element at address: 0x200000400000 with size: 0.998535 MiB 00:07:01.819 element at address: 0x200031800000 with size: 0.994446 MiB 00:07:01.819 element at address: 0x200006400000 with size: 0.959839 MiB 00:07:01.819 element at address: 0x200012c00000 with size: 0.954285 MiB 00:07:01.819 element at address: 0x200018e00000 with size: 0.936584 MiB 00:07:01.819 element at address: 0x200000200000 with size: 0.717346 MiB 00:07:01.819 element at address: 0x20001a600000 with size: 0.582886 MiB 00:07:01.819 element at address: 0x200000c00000 with size: 0.495422 MiB 00:07:01.819 element at address: 0x20000a600000 with size: 0.490723 MiB 00:07:01.819 element at address: 0x200019000000 with size: 0.485657 MiB 00:07:01.819 element at address: 0x200003e00000 with size: 0.481934 MiB 00:07:01.819 element at address: 0x200027a00000 with size: 0.410034 MiB 00:07:01.819 element at address: 0x200000800000 with size: 0.355042 MiB 00:07:01.819 list of standard malloc elements. 
size: 199.218628 MiB 00:07:01.819 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:07:01.819 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:07:01.819 element at address: 0x200018afff80 with size: 1.000122 MiB 00:07:01.819 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:07:01.819 element at address: 0x200018efff80 with size: 1.000122 MiB 00:07:01.819 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:07:01.819 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:07:01.819 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:07:01.819 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:07:01.819 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:07:01.819 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:07:01.819 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:07:01.819 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:07:01.819 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:07:01.819 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:07:01.819 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:07:01.819 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:07:01.819 element at address: 0x20000085b040 with size: 0.000183 MiB 00:07:01.819 element at address: 0x20000085f300 with size: 0.000183 MiB 00:07:01.819 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:07:01.819 element at address: 0x20000087f680 with size: 0.000183 MiB 00:07:01.819 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:07:01.819 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:07:01.819 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:07:01.819 element at address: 0x200000cff000 with size: 0.000183 MiB 00:07:01.819 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:07:01.819 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:07:01.819 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:07:01.819 element at address: 0x200003efb980 with size: 0.000183 MiB 00:07:01.819 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:07:01.819 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:07:01.819 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:07:01.819 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:07:01.819 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:07:01.819 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:07:01.819 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:07:01.819 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:07:01.819 element at address: 0x20001a695380 with size: 0.000183 MiB 00:07:01.819 element at address: 0x20001a695440 with size: 0.000183 MiB 00:07:01.819 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:07:01.819 element at address: 0x200027a69040 with size: 0.000183 MiB 00:07:01.819 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:07:01.819 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:07:01.820 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:07:01.820 list of memzone associated elements. 
size: 599.918884 MiB 00:07:01.820 element at address: 0x20001a695500 with size: 211.416748 MiB 00:07:01.820 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:07:01.820 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:07:01.820 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:07:01.820 element at address: 0x200012df4780 with size: 92.045044 MiB 00:07:01.820 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_1507351_0 00:07:01.820 element at address: 0x200000dff380 with size: 48.003052 MiB 00:07:01.820 associated memzone info: size: 48.002930 MiB name: MP_msgpool_1507351_0 00:07:01.820 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:07:01.820 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_1507351_0 00:07:01.820 element at address: 0x2000191be940 with size: 20.255554 MiB 00:07:01.820 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:07:01.820 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:07:01.820 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:07:01.820 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:07:01.820 associated memzone info: size: 3.000122 MiB name: MP_evtpool_1507351_0 00:07:01.820 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:07:01.820 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_1507351 00:07:01.820 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:07:01.820 associated memzone info: size: 1.007996 MiB name: MP_evtpool_1507351 00:07:01.820 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:07:01.820 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:07:01.820 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:07:01.820 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:07:01.820 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:07:01.820 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:07:01.820 element at address: 0x200003efba40 with size: 1.008118 MiB 00:07:01.820 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:07:01.820 element at address: 0x200000cff180 with size: 1.000488 MiB 00:07:01.820 associated memzone info: size: 1.000366 MiB name: RG_ring_0_1507351 00:07:01.820 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:07:01.820 associated memzone info: size: 1.000366 MiB name: RG_ring_1_1507351 00:07:01.820 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:07:01.820 associated memzone info: size: 1.000366 MiB name: RG_ring_4_1507351 00:07:01.820 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:07:01.820 associated memzone info: size: 1.000366 MiB name: RG_ring_5_1507351 00:07:01.820 element at address: 0x20000087f740 with size: 0.500488 MiB 00:07:01.820 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_1507351 00:07:01.820 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:07:01.820 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_1507351 00:07:01.820 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:07:01.820 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:07:01.820 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:07:01.820 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:07:01.820 element at address: 0x20001907c540 with size: 0.250488 MiB 00:07:01.820 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:07:01.820 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:07:01.820 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_1507351 00:07:01.820 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:07:01.820 associated memzone info: size: 0.125366 MiB name: RG_ring_2_1507351 00:07:01.820 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:07:01.820 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:07:01.820 element at address: 0x200027a69100 with size: 0.023743 MiB 00:07:01.820 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:07:01.820 element at address: 0x20000085b100 with size: 0.016113 MiB 00:07:01.820 associated memzone info: size: 0.015991 MiB name: RG_ring_3_1507351 00:07:01.820 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:07:01.820 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:07:01.820 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:07:01.820 associated memzone info: size: 0.000183 MiB name: MP_msgpool_1507351 00:07:01.820 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:07:01.820 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_1507351 00:07:01.820 element at address: 0x20000085af00 with size: 0.000305 MiB 00:07:01.820 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_1507351 00:07:01.820 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:07:01.820 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:07:01.820 08:04:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:07:01.820 08:04:15 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 1507351 00:07:01.820 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 1507351 ']' 00:07:01.820 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 1507351 00:07:01.820 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:07:01.820 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:01.820 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1507351 00:07:01.820 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:01.820 08:04:15 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:01.820 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1507351' 00:07:01.820 killing process with pid 1507351 00:07:01.820 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 1507351 00:07:01.820 08:04:15 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 1507351 00:07:02.079 00:07:02.079 real 0m1.019s 00:07:02.079 user 0m0.934s 00:07:02.079 sys 0m0.422s 00:07:02.079 08:04:16 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:02.079 08:04:16 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:07:02.079 ************************************ 00:07:02.079 END TEST dpdk_mem_utility 00:07:02.079 ************************************ 00:07:02.079 08:04:16 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:02.079 08:04:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:02.079 08:04:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.079 08:04:16 -- common/autotest_common.sh@10 -- # set +x 00:07:02.338 ************************************ 00:07:02.338 START TEST event 00:07:02.338 ************************************ 00:07:02.338 08:04:16 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:07:02.338 * Looking for test storage... 
00:07:02.338 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:02.338 08:04:16 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:02.338 08:04:16 event -- common/autotest_common.sh@1693 -- # lcov --version 00:07:02.338 08:04:16 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:02.338 08:04:16 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:02.338 08:04:16 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:02.338 08:04:16 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:02.338 08:04:16 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:02.338 08:04:16 event -- scripts/common.sh@336 -- # IFS=.-: 00:07:02.338 08:04:16 event -- scripts/common.sh@336 -- # read -ra ver1 00:07:02.338 08:04:16 event -- scripts/common.sh@337 -- # IFS=.-: 00:07:02.338 08:04:16 event -- scripts/common.sh@337 -- # read -ra ver2 00:07:02.338 08:04:16 event -- scripts/common.sh@338 -- # local 'op=<' 00:07:02.338 08:04:16 event -- scripts/common.sh@340 -- # ver1_l=2 00:07:02.338 08:04:16 event -- scripts/common.sh@341 -- # ver2_l=1 00:07:02.338 08:04:16 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:02.338 08:04:16 event -- scripts/common.sh@344 -- # case "$op" in 00:07:02.338 08:04:16 event -- scripts/common.sh@345 -- # : 1 00:07:02.338 08:04:16 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:02.338 08:04:16 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:02.338 08:04:16 event -- scripts/common.sh@365 -- # decimal 1 00:07:02.338 08:04:16 event -- scripts/common.sh@353 -- # local d=1 00:07:02.338 08:04:16 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:02.338 08:04:16 event -- scripts/common.sh@355 -- # echo 1 00:07:02.338 08:04:16 event -- scripts/common.sh@365 -- # ver1[v]=1 00:07:02.338 08:04:16 event -- scripts/common.sh@366 -- # decimal 2 00:07:02.338 08:04:16 event -- scripts/common.sh@353 -- # local d=2 00:07:02.338 08:04:16 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:02.338 08:04:16 event -- scripts/common.sh@355 -- # echo 2 00:07:02.338 08:04:16 event -- scripts/common.sh@366 -- # ver2[v]=2 00:07:02.338 08:04:16 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:02.338 08:04:16 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:02.338 08:04:16 event -- scripts/common.sh@368 -- # return 0 00:07:02.338 08:04:16 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:02.338 08:04:16 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:02.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.338 --rc genhtml_branch_coverage=1 00:07:02.338 --rc genhtml_function_coverage=1 00:07:02.338 --rc genhtml_legend=1 00:07:02.338 --rc geninfo_all_blocks=1 00:07:02.338 --rc geninfo_unexecuted_blocks=1 00:07:02.338 00:07:02.338 ' 00:07:02.338 08:04:16 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:02.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.338 --rc genhtml_branch_coverage=1 00:07:02.338 --rc genhtml_function_coverage=1 00:07:02.338 --rc genhtml_legend=1 00:07:02.338 --rc geninfo_all_blocks=1 00:07:02.338 --rc geninfo_unexecuted_blocks=1 00:07:02.338 00:07:02.338 ' 00:07:02.338 08:04:16 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:02.338 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:02.338 --rc genhtml_branch_coverage=1 00:07:02.338 --rc genhtml_function_coverage=1 00:07:02.338 --rc genhtml_legend=1 00:07:02.338 --rc geninfo_all_blocks=1 00:07:02.338 --rc geninfo_unexecuted_blocks=1 00:07:02.338 00:07:02.338 ' 00:07:02.338 08:04:16 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:02.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:02.338 --rc genhtml_branch_coverage=1 00:07:02.339 --rc genhtml_function_coverage=1 00:07:02.339 --rc genhtml_legend=1 00:07:02.339 --rc geninfo_all_blocks=1 00:07:02.339 --rc geninfo_unexecuted_blocks=1 00:07:02.339 00:07:02.339 ' 00:07:02.339 08:04:16 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:07:02.339 08:04:16 event -- bdev/nbd_common.sh@6 -- # set -e 00:07:02.339 08:04:16 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:02.339 08:04:16 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:02.339 08:04:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:02.339 08:04:16 event -- common/autotest_common.sh@10 -- # set +x 00:07:02.339 ************************************ 00:07:02.339 START TEST event_perf 00:07:02.339 ************************************ 00:07:02.339 08:04:16 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:07:02.339 Running I/O for 1 seconds...[2024-11-20 08:04:16.352648] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:07:02.339 [2024-11-20 08:04:16.352716] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507641 ] 00:07:02.597 [2024-11-20 08:04:16.434518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:02.597 [2024-11-20 08:04:16.477902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:02.597 [2024-11-20 08:04:16.478014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:02.597 [2024-11-20 08:04:16.478120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.597 [2024-11-20 08:04:16.478121] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:03.534 Running I/O for 1 seconds... 00:07:03.534 lcore 0: 203275 00:07:03.534 lcore 1: 203272 00:07:03.534 lcore 2: 203275 00:07:03.534 lcore 3: 203275 00:07:03.534 done. 
00:07:03.534 00:07:03.534 real 0m1.186s 00:07:03.534 user 0m4.098s 00:07:03.534 sys 0m0.084s 00:07:03.534 08:04:17 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:03.534 08:04:17 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:07:03.534 ************************************ 00:07:03.534 END TEST event_perf 00:07:03.534 ************************************ 00:07:03.534 08:04:17 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:03.534 08:04:17 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:03.534 08:04:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:03.534 08:04:17 event -- common/autotest_common.sh@10 -- # set +x 00:07:03.793 ************************************ 00:07:03.793 START TEST event_reactor 00:07:03.793 ************************************ 00:07:03.793 08:04:17 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:07:03.793 [2024-11-20 08:04:17.609139] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:07:03.793 [2024-11-20 08:04:17.609217] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1507890 ] 00:07:03.793 [2024-11-20 08:04:17.685963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.793 [2024-11-20 08:04:17.726117] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:05.171 test_start 00:07:05.171 oneshot 00:07:05.171 tick 100 00:07:05.171 tick 100 00:07:05.171 tick 250 00:07:05.171 tick 100 00:07:05.171 tick 100 00:07:05.171 tick 100 00:07:05.171 tick 250 00:07:05.171 tick 500 00:07:05.171 tick 100 00:07:05.171 tick 100 00:07:05.171 tick 250 00:07:05.171 tick 100 00:07:05.171 tick 100 00:07:05.171 test_end 00:07:05.171 00:07:05.171 real 0m1.174s 00:07:05.171 user 0m1.097s 00:07:05.171 sys 0m0.073s 00:07:05.171 08:04:18 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:05.171 08:04:18 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:07:05.171 ************************************ 00:07:05.171 END TEST event_reactor 00:07:05.171 ************************************ 00:07:05.171 08:04:18 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:07:05.171 08:04:18 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:07:05.171 08:04:18 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:05.171 08:04:18 event -- common/autotest_common.sh@10 -- # set +x 00:07:05.171 ************************************ 00:07:05.171 START TEST event_reactor_perf 00:07:05.171 ************************************ 00:07:05.171 08:04:18 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:07:05.171 [2024-11-20 08:04:18.851899] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:05.171 [2024-11-20 08:04:18.851969] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508142 ] 00:07:05.171 [2024-11-20 08:04:18.926976] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:05.171 [2024-11-20 08:04:18.966158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.107 test_start 00:07:06.107 test_end 00:07:06.107 Performance: 515910 events per second 00:07:06.107 00:07:06.107 real 0m1.175s 00:07:06.107 user 0m1.099s 00:07:06.107 sys 0m0.073s 00:07:06.107 08:04:20 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:06.107 08:04:20 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:07:06.107 ************************************ 00:07:06.107 END TEST event_reactor_perf 00:07:06.107 ************************************ 00:07:06.107 08:04:20 event -- event/event.sh@49 -- # uname -s 00:07:06.107 08:04:20 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:07:06.107 08:04:20 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:06.107 08:04:20 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.107 08:04:20 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.107 08:04:20 event -- common/autotest_common.sh@10 -- # set +x 00:07:06.107 ************************************ 00:07:06.107 START TEST event_scheduler 00:07:06.107 ************************************ 00:07:06.108 08:04:20 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:07:06.367 * Looking for test storage... 00:07:06.367 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:06.367 08:04:20 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:06.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.367 --rc genhtml_branch_coverage=1 00:07:06.367 --rc genhtml_function_coverage=1 00:07:06.367 --rc genhtml_legend=1 00:07:06.367 --rc geninfo_all_blocks=1 00:07:06.367 --rc geninfo_unexecuted_blocks=1 00:07:06.367 00:07:06.367 ' 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:06.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.367 --rc genhtml_branch_coverage=1 00:07:06.367 --rc genhtml_function_coverage=1 00:07:06.367 --rc 
genhtml_legend=1 00:07:06.367 --rc geninfo_all_blocks=1 00:07:06.367 --rc geninfo_unexecuted_blocks=1 00:07:06.367 00:07:06.367 ' 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:06.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.367 --rc genhtml_branch_coverage=1 00:07:06.367 --rc genhtml_function_coverage=1 00:07:06.367 --rc genhtml_legend=1 00:07:06.367 --rc geninfo_all_blocks=1 00:07:06.367 --rc geninfo_unexecuted_blocks=1 00:07:06.367 00:07:06.367 ' 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:06.367 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:06.367 --rc genhtml_branch_coverage=1 00:07:06.367 --rc genhtml_function_coverage=1 00:07:06.367 --rc genhtml_legend=1 00:07:06.367 --rc geninfo_all_blocks=1 00:07:06.367 --rc geninfo_unexecuted_blocks=1 00:07:06.367 00:07:06.367 ' 00:07:06.367 08:04:20 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:07:06.367 08:04:20 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=1508431 00:07:06.367 08:04:20 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:07:06.367 08:04:20 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:07:06.367 08:04:20 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 1508431 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 1508431 ']' 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.367 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:06.367 08:04:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:06.367 [2024-11-20 08:04:20.299652] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:06.367 [2024-11-20 08:04:20.299700] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1508431 ] 00:07:06.367 [2024-11-20 08:04:20.356295] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:06.627 [2024-11-20 08:04:20.402657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.627 [2024-11-20 08:04:20.402767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:06.627 [2024-11-20 08:04:20.402862] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:06.627 [2024-11-20 08:04:20.402864] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:06.627 08:04:20 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:06.627 08:04:20 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:07:06.627 08:04:20 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:07:06.627 08:04:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.627 08:04:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 [2024-11-20 08:04:20.459471] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:07:06.627 [2024-11-20 08:04:20.459487] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:07:06.627 [2024-11-20 08:04:20.459497] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:07:06.627 [2024-11-20 08:04:20.459502] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:07:06.627 [2024-11-20 08:04:20.459507] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:07:06.627 08:04:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.627 08:04:20 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:07:06.627 08:04:20 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.627 08:04:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 [2024-11-20 08:04:20.533266] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:07:06.627 08:04:20 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.627 08:04:20 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:07:06.627 08:04:20 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:06.627 08:04:20 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:06.627 08:04:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 ************************************ 00:07:06.627 START TEST scheduler_create_thread 00:07:06.627 ************************************ 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 2 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 3 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 4 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 5 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.627 08:04:20 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 6 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 7 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.627 8 00:07:06.627 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.628 08:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:07:06.628 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.628 08:04:20 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.628 9 00:07:06.628 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.628 08:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:07:06.628 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.628 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.886 10 00:07:06.886 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.886 08:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:07:06.886 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.886 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:06.886 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:06.886 08:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:07:06.886 08:04:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:07:06.886 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:06.886 08:04:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:07.145 08:04:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:07.145 08:04:21 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:07:07.145 08:04:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:07.145 08:04:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:09.077 08:04:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:09.077 08:04:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:07:09.077 08:04:22 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:07:09.077 08:04:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:09.077 08:04:22 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.011 08:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:10.011 00:07:10.011 real 0m3.101s 00:07:10.011 user 0m0.022s 00:07:10.011 sys 0m0.006s 00:07:10.011 08:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.011 08:04:23 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:07:10.011 ************************************ 00:07:10.011 END TEST scheduler_create_thread 00:07:10.011 ************************************ 00:07:10.011 08:04:23 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:07:10.011 08:04:23 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 1508431 00:07:10.011 08:04:23 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 1508431 ']' 00:07:10.011 08:04:23 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 1508431 00:07:10.011 08:04:23 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:07:10.011 08:04:23 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:10.011 08:04:23 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1508431 00:07:10.011 08:04:23 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:10.011 08:04:23 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:10.011 08:04:23 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1508431' 00:07:10.011 killing process with pid 1508431 00:07:10.011 08:04:23 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 1508431 00:07:10.011 08:04:23 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 1508431 00:07:10.269 [2024-11-20 08:04:24.052433] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:07:10.269 00:07:10.269 real 0m4.151s 00:07:10.269 user 0m6.699s 00:07:10.269 sys 0m0.350s 00:07:10.269 08:04:24 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.269 08:04:24 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:07:10.269 ************************************ 00:07:10.269 END TEST event_scheduler 00:07:10.269 ************************************ 00:07:10.269 08:04:24 event -- event/event.sh@51 -- # modprobe -n nbd 00:07:10.269 08:04:24 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:07:10.269 08:04:24 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:10.269 08:04:24 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.269 08:04:24 event -- common/autotest_common.sh@10 -- # set +x 00:07:10.528 ************************************ 00:07:10.528 START TEST app_repeat 00:07:10.528 ************************************ 00:07:10.528 08:04:24 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@19 -- # repeat_pid=1509142 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 1509142' 00:07:10.528 Process app_repeat pid: 1509142 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:07:10.528 spdk_app_start Round 0 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1509142 /var/tmp/spdk-nbd.sock 00:07:10.528 08:04:24 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1509142 ']' 00:07:10.528 08:04:24 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:10.528 08:04:24 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:10.528 08:04:24 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:10.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:07:10.528 08:04:24 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:10.528 08:04:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:10.528 [2024-11-20 08:04:24.344149] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:07:10.528 [2024-11-20 08:04:24.344199] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1509142 ] 00:07:10.528 [2024-11-20 08:04:24.417690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:10.528 [2024-11-20 08:04:24.461735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:10.528 [2024-11-20 08:04:24.461736] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.528 08:04:24 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:10.528 08:04:24 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:10.528 08:04:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:10.787 Malloc0 00:07:10.787 08:04:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:11.046 Malloc1 00:07:11.046 08:04:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:11.046 
08:04:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.046 08:04:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:11.305 /dev/nbd0 00:07:11.305 08:04:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:11.305 08:04:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:07:11.305 1+0 records in 00:07:11.305 1+0 records out 00:07:11.305 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226959 s, 18.0 MB/s 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.305 08:04:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:11.305 08:04:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.305 08:04:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.305 08:04:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:11.563 /dev/nbd1 00:07:11.563 08:04:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:11.563 08:04:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:11.563 08:04:25 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:11.563 08:04:25 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:11.563 08:04:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:11.563 08:04:25 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:11.563 08:04:25 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:11.563 08:04:25 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:11.563 08:04:25 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:11.563 08:04:25 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:11.563 08:04:25 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:11.563 1+0 records in 00:07:11.563 1+0 records out 00:07:11.563 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000204642 s, 20.0 MB/s 00:07:11.563 08:04:25 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.564 08:04:25 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:11.564 08:04:25 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:11.564 08:04:25 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:11.564 08:04:25 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:11.564 08:04:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:11.564 08:04:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:11.564 08:04:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:11.564 08:04:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.564 08:04:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:11.823 { 00:07:11.823 "nbd_device": "/dev/nbd0", 00:07:11.823 "bdev_name": "Malloc0" 00:07:11.823 }, 00:07:11.823 { 00:07:11.823 "nbd_device": "/dev/nbd1", 00:07:11.823 "bdev_name": "Malloc1" 00:07:11.823 } 00:07:11.823 ]' 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:11.823 { 00:07:11.823 "nbd_device": "/dev/nbd0", 00:07:11.823 "bdev_name": "Malloc0" 00:07:11.823 
}, 00:07:11.823 { 00:07:11.823 "nbd_device": "/dev/nbd1", 00:07:11.823 "bdev_name": "Malloc1" 00:07:11.823 } 00:07:11.823 ]' 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:11.823 /dev/nbd1' 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:11.823 /dev/nbd1' 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:11.823 256+0 records in 00:07:11.823 256+0 records out 00:07:11.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0105 s, 99.9 MB/s 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:11.823 256+0 records in 00:07:11.823 256+0 records out 00:07:11.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137218 s, 76.4 MB/s 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:11.823 256+0 records in 00:07:11.823 256+0 records out 00:07:11.823 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152503 s, 68.8 MB/s 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:11.823 08:04:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.824 08:04:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:11.824 08:04:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:11.824 08:04:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:11.824 08:04:25 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:11.824 08:04:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:11.824 08:04:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:11.824 08:04:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:11.824 08:04:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:11.824 08:04:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:11.824 08:04:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:11.824 08:04:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:12.082 08:04:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:12.082 08:04:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:12.082 08:04:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:12.083 08:04:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.083 08:04:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.083 08:04:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:12.083 08:04:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.083 08:04:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.083 08:04:26 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:12.083 08:04:26 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:12.341 08:04:26 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:12.341 08:04:26 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:12.341 08:04:26 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:12.341 08:04:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:12.341 08:04:26 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:12.341 08:04:26 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:12.341 08:04:26 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:12.341 08:04:26 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:12.341 08:04:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:12.341 08:04:26 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:12.341 08:04:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:12.601 08:04:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:12.601 08:04:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:12.601 08:04:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:12.601 08:04:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:12.601 08:04:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:12.601 08:04:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:12.601 08:04:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:12.601 08:04:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:12.601 08:04:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:12.601 08:04:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:12.601 08:04:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:12.601 08:04:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:12.601 08:04:26 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:12.859 08:04:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:12.859 [2024-11-20 08:04:26.832661] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:12.859 [2024-11-20 08:04:26.869535] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:12.859 [2024-11-20 08:04:26.869536] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:13.118 [2024-11-20 08:04:26.909349] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:13.118 [2024-11-20 08:04:26.909386] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:16.407 08:04:29 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:16.407 08:04:29 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:07:16.407 spdk_app_start Round 1 00:07:16.407 08:04:29 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1509142 /var/tmp/spdk-nbd.sock 00:07:16.407 08:04:29 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1509142 ']' 00:07:16.407 08:04:29 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:16.407 08:04:29 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:16.407 08:04:29 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:16.407 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:16.407 08:04:29 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:16.407 08:04:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:16.407 08:04:29 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:16.407 08:04:29 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:16.407 08:04:29 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:16.407 Malloc0 00:07:16.407 08:04:30 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:16.407 Malloc1 00:07:16.407 08:04:30 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:16.407 08:04:30 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.407 08:04:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:16.407 08:04:30 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:16.407 08:04:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.407 08:04:30 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:16.407 08:04:30 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:16.407 08:04:30 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.407 08:04:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:16.407 08:04:30 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:16.407 08:04:30 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:16.408 08:04:30 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:16.408 08:04:30 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:16.408 08:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:16.408 08:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.408 08:04:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:16.666 /dev/nbd0 00:07:16.666 08:04:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:16.666 08:04:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:16.666 1+0 records in 00:07:16.666 1+0 records out 00:07:16.666 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000205562 s, 19.9 MB/s 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:16.666 08:04:30 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.666 08:04:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:16.666 08:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.666 08:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.666 08:04:30 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:16.925 /dev/nbd1 00:07:16.925 08:04:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:16.925 08:04:30 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:16.925 1+0 records in 00:07:16.925 1+0 records out 00:07:16.925 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238874 s, 17.1 MB/s 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:16.925 08:04:30 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:16.925 08:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:16.925 08:04:30 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:16.925 08:04:30 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:16.925 08:04:30 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:16.925 08:04:30 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:17.183 { 00:07:17.183 "nbd_device": "/dev/nbd0", 00:07:17.183 "bdev_name": "Malloc0" 00:07:17.183 }, 00:07:17.183 { 00:07:17.183 "nbd_device": "/dev/nbd1", 00:07:17.183 "bdev_name": "Malloc1" 00:07:17.183 } 00:07:17.183 ]' 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:17.183 { 00:07:17.183 "nbd_device": "/dev/nbd0", 00:07:17.183 "bdev_name": "Malloc0" 00:07:17.183 }, 00:07:17.183 { 00:07:17.183 "nbd_device": "/dev/nbd1", 00:07:17.183 "bdev_name": "Malloc1" 00:07:17.183 } 00:07:17.183 ]' 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:17.183 /dev/nbd1' 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.183 08:04:31 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:17.183 /dev/nbd1' 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:17.183 256+0 records in 00:07:17.183 256+0 records out 00:07:17.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0102173 s, 103 MB/s 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:17.183 256+0 records in 00:07:17.183 256+0 records out 00:07:17.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0134855 s, 77.8 MB/s 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:17.183 256+0 records in 00:07:17.183 256+0 records out 00:07:17.183 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145237 s, 72.2 MB/s 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:17.183 08:04:31 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.184 08:04:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:17.443 08:04:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:17.443 08:04:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:17.443 08:04:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:17.443 08:04:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.443 08:04:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.443 08:04:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:17.443 08:04:31 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:17.443 08:04:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.443 08:04:31 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:17.443 08:04:31 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:17.702 08:04:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:17.702 08:04:31 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:17.702 08:04:31 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:17.702 08:04:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:17.702 08:04:31 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:17.702 08:04:31 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:17.702 08:04:31 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:17.702 08:04:31 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:17.702 08:04:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:17.702 08:04:31 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:17.702 08:04:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:17.961 08:04:31 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:17.961 08:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:17.961 08:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:17.961 08:04:31 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:17.961 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:17.961 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:17.961 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:17.961 08:04:31 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:17.961 08:04:31 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:17.961 08:04:31 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:17.961 08:04:31 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:17.961 08:04:31 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:17.961 08:04:31 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:18.220 08:04:32 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:18.220 [2024-11-20 08:04:32.158500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:18.221 [2024-11-20 08:04:32.194765] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:18.221 [2024-11-20 08:04:32.194766] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:18.221 [2024-11-20 08:04:32.236012] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:18.221 [2024-11-20 08:04:32.236050] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:21.510 08:04:35 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:07:21.510 08:04:35 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:07:21.510 spdk_app_start Round 2 00:07:21.510 08:04:35 event.app_repeat -- event/event.sh@25 -- # waitforlisten 1509142 /var/tmp/spdk-nbd.sock 00:07:21.510 08:04:35 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1509142 ']' 00:07:21.510 08:04:35 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:21.510 08:04:35 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.510 08:04:35 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:21.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:07:21.510 08:04:35 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.510 08:04:35 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:21.510 08:04:35 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.510 08:04:35 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:21.510 08:04:35 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.510 Malloc0 00:07:21.510 08:04:35 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:07:21.770 Malloc1 00:07:21.770 08:04:35 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:21.770 08:04:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:07:22.029 /dev/nbd0 00:07:22.029 08:04:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:07:22.029 08:04:35 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:07:22.029 08:04:35 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:07:22.029 08:04:35 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:22.029 08:04:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.029 08:04:35 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.030 08:04:35 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:07:22.030 08:04:35 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:22.030 08:04:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.030 08:04:35 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.030 08:04:35 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.030 1+0 records in 00:07:22.030 1+0 records out 00:07:22.030 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000221196 s, 18.5 MB/s 00:07:22.030 08:04:35 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:22.030 08:04:35 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:22.030 08:04:35 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:22.030 08:04:35 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.030 08:04:35 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:22.030 08:04:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.030 08:04:35 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.030 08:04:35 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:07:22.289 /dev/nbd1 00:07:22.289 08:04:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:07:22.289 08:04:36 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:07:22.289 1+0 records in 00:07:22.289 1+0 records out 00:07:22.289 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000211715 s, 19.3 MB/s 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:07:22.289 08:04:36 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:07:22.289 08:04:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:07:22.289 08:04:36 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:07:22.289 08:04:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:22.289 08:04:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.289 08:04:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:22.549 08:04:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:07:22.549 { 00:07:22.549 "nbd_device": "/dev/nbd0", 00:07:22.549 "bdev_name": "Malloc0" 00:07:22.549 }, 00:07:22.549 { 00:07:22.549 "nbd_device": "/dev/nbd1", 00:07:22.549 "bdev_name": "Malloc1" 00:07:22.549 } 00:07:22.549 ]' 00:07:22.549 08:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:07:22.549 { 00:07:22.549 "nbd_device": "/dev/nbd0", 00:07:22.549 "bdev_name": "Malloc0" 00:07:22.549 }, 00:07:22.549 { 00:07:22.549 "nbd_device": "/dev/nbd1", 00:07:22.549 "bdev_name": "Malloc1" 00:07:22.549 } 00:07:22.549 ]' 00:07:22.549 08:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:22.549 08:04:36 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:07:22.549 /dev/nbd1' 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:07:22.550 /dev/nbd1' 00:07:22.550 
08:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:07:22.550 256+0 records in 00:07:22.550 256+0 records out 00:07:22.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106486 s, 98.5 MB/s 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:07:22.550 256+0 records in 00:07:22.550 256+0 records out 00:07:22.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.013706 s, 76.5 MB/s 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:07:22.550 256+0 records in 00:07:22.550 256+0 records out 00:07:22.550 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0145401 s, 72.1 MB/s 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.550 08:04:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:07:22.809 08:04:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:07:22.809 08:04:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:07:22.809 08:04:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:07:22.809 08:04:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:22.809 08:04:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:22.809 08:04:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:07:22.809 08:04:36 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:07:22.809 08:04:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:22.809 08:04:36 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:07:22.809 08:04:36 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:07:23.068 08:04:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:07:23.068 08:04:36 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:07:23.068 08:04:36 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:07:23.068 08:04:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:07:23.068 08:04:36 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:07:23.068 08:04:36 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:07:23.068 08:04:36 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:07:23.068 08:04:36 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:07:23.068 08:04:36 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:07:23.068 08:04:36 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:07:23.068 08:04:36 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:07:23.068 08:04:37 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:07:23.068 08:04:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:07:23.068 08:04:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:07:23.327 08:04:37 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:07:23.327 08:04:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:07:23.327 08:04:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:07:23.327 08:04:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:07:23.327 08:04:37 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:07:23.327 08:04:37 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:07:23.327 08:04:37 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:07:23.327 08:04:37 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:07:23.327 08:04:37 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:07:23.327 08:04:37 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:07:23.327 08:04:37 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:07:23.586 [2024-11-20 08:04:37.474719] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:23.586 [2024-11-20 08:04:37.511345] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:23.586 [2024-11-20 08:04:37.511346] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.586 [2024-11-20 08:04:37.552235] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:07:23.586 [2024-11-20 08:04:37.552286] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:07:26.875 08:04:40 event.app_repeat -- event/event.sh@38 -- # waitforlisten 1509142 /var/tmp/spdk-nbd.sock 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 1509142 ']' 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:07:26.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
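The `waitfornbd_exit` calls traced above poll `/proc/partitions` until the stopped nbd device disappears. A minimal sketch of that polling loop, assuming the same 20-attempt budget shown in the trace (the real helper lives in `bdev/nbd_common.sh`; this standalone version is an approximation):

```shell
# Sketch of waitfornbd_exit: succeed once the named nbd device is no
# longer listed in /proc/partitions, give up after 20 one-second polls.
waitfornbd_exit() {
	local nbd_name=$1 i
	for ((i = 1; i <= 20; i++)); do
		# grep -w matches the whole word, so "nbd1" does not match "nbd10"
		grep -q -w "$nbd_name" /proc/partitions || return 0
		sleep 1
	done
	return 1  # device still present after the retry budget
}
```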
00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:07:26.875 08:04:40 event.app_repeat -- event/event.sh@39 -- # killprocess 1509142 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 1509142 ']' 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 1509142 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1509142 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1509142' 00:07:26.875 killing process with pid 1509142 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@973 -- # kill 1509142 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@978 -- # wait 1509142 00:07:26.875 spdk_app_start is called in Round 0. 00:07:26.875 Shutdown signal received, stop current app iteration 00:07:26.875 Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 reinitialization... 00:07:26.875 spdk_app_start is called in Round 1. 00:07:26.875 Shutdown signal received, stop current app iteration 00:07:26.875 Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 reinitialization... 00:07:26.875 spdk_app_start is called in Round 2. 
00:07:26.875 Shutdown signal received, stop current app iteration 00:07:26.875 Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 reinitialization... 00:07:26.875 spdk_app_start is called in Round 3. 00:07:26.875 Shutdown signal received, stop current app iteration 00:07:26.875 08:04:40 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:07:26.875 08:04:40 event.app_repeat -- event/event.sh@42 -- # return 0 00:07:26.875 00:07:26.875 real 0m16.416s 00:07:26.875 user 0m36.046s 00:07:26.875 sys 0m2.585s 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.875 08:04:40 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:07:26.875 ************************************ 00:07:26.875 END TEST app_repeat 00:07:26.875 ************************************ 00:07:26.875 08:04:40 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:07:26.875 08:04:40 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:26.875 08:04:40 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:26.875 08:04:40 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.875 08:04:40 event -- common/autotest_common.sh@10 -- # set +x 00:07:26.875 ************************************ 00:07:26.875 START TEST cpu_locks 00:07:26.875 ************************************ 00:07:26.875 08:04:40 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:07:26.875 * Looking for test storage... 
00:07:26.875 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:07:26.875 08:04:40 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:26.876 08:04:40 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:07:26.876 08:04:40 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:27.135 08:04:40 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.135 08:04:40 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:07:27.135 08:04:40 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:27.135 08:04:40 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:27.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.135 --rc genhtml_branch_coverage=1 00:07:27.135 --rc genhtml_function_coverage=1 00:07:27.135 --rc genhtml_legend=1 00:07:27.135 --rc geninfo_all_blocks=1 00:07:27.135 --rc geninfo_unexecuted_blocks=1 00:07:27.135 00:07:27.135 ' 00:07:27.135 08:04:40 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:27.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.135 --rc genhtml_branch_coverage=1 00:07:27.135 --rc genhtml_function_coverage=1 00:07:27.135 --rc genhtml_legend=1 00:07:27.135 --rc geninfo_all_blocks=1 00:07:27.135 --rc geninfo_unexecuted_blocks=1 
00:07:27.135 00:07:27.135 ' 00:07:27.135 08:04:40 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:27.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.135 --rc genhtml_branch_coverage=1 00:07:27.135 --rc genhtml_function_coverage=1 00:07:27.135 --rc genhtml_legend=1 00:07:27.135 --rc geninfo_all_blocks=1 00:07:27.135 --rc geninfo_unexecuted_blocks=1 00:07:27.135 00:07:27.135 ' 00:07:27.135 08:04:40 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:27.135 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.135 --rc genhtml_branch_coverage=1 00:07:27.135 --rc genhtml_function_coverage=1 00:07:27.135 --rc genhtml_legend=1 00:07:27.135 --rc geninfo_all_blocks=1 00:07:27.135 --rc geninfo_unexecuted_blocks=1 00:07:27.135 00:07:27.135 ' 00:07:27.135 08:04:40 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:07:27.135 08:04:40 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:07:27.135 08:04:40 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:07:27.135 08:04:40 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:07:27.135 08:04:40 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:27.135 08:04:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.135 08:04:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.135 ************************************ 00:07:27.135 START TEST default_locks 00:07:27.135 ************************************ 00:07:27.135 08:04:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:07:27.135 08:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=1512170 00:07:27.135 08:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 1512170 00:07:27.135 08:04:41 
event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:27.135 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1512170 ']' 00:07:27.135 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.135 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:27.135 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.135 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:27.135 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:27.135 [2024-11-20 08:04:41.051038] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
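The `lt 1.15 2` / `cmp_versions` trace from `scripts/common.sh` above splits each version string on `.`, `-` and `:` and compares the pieces numerically, component by component. A condensed sketch of that comparison (the helper names mirror the trace; the exact upstream implementation may differ):

```shell
# Sketch of the lt version comparison: return 0 when $1 < $2 as a
# dotted version number, so 1.15 < 2 and 1.9 < 1.15 despite string order.
lt() {
	local -a ver1 ver2
	local v ver1_l ver2_l
	IFS=.-: read -ra ver1 <<< "$1"
	IFS=.-: read -ra ver2 <<< "$2"
	ver1_l=${#ver1[@]} ver2_l=${#ver2[@]}
	for ((v = 0; v < (ver1_l > ver2_l ? ver1_l : ver2_l); v++)); do
		# missing components compare as 0, e.g. "2" behaves like "2.0"
		((${ver1[v]:-0} < ${ver2[v]:-0})) && return 0
		((${ver1[v]:-0} > ${ver2[v]:-0})) && return 1
	done
	return 1  # equal versions are not less-than
}
```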
00:07:27.135 [2024-11-20 08:04:41.051073] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512170 ] 00:07:27.135 [2024-11-20 08:04:41.123934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.395 [2024-11-20 08:04:41.163677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.395 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:27.395 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:07:27.395 08:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 1512170 00:07:27.395 08:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 1512170 00:07:27.395 08:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:27.964 lslocks: write error 00:07:27.964 08:04:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 1512170 00:07:27.964 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 1512170 ']' 00:07:27.964 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 1512170 00:07:27.964 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:07:27.964 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.964 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1512170 00:07:27.964 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.964 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.964 08:04:41 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1512170' 00:07:27.964 killing process with pid 1512170 00:07:27.964 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 1512170 00:07:27.964 08:04:41 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 1512170 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 1512170 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1512170 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 1512170 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 1512170 ']' 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.224 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1512170) - No such process 00:07:28.224 ERROR: process (pid: 1512170) is no longer running 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:28.224 00:07:28.224 real 0m1.076s 00:07:28.224 user 0m1.030s 00:07:28.224 sys 0m0.491s 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:28.224 08:04:42 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.224 ************************************ 00:07:28.224 END TEST default_locks 00:07:28.224 ************************************ 00:07:28.224 08:04:42 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:07:28.224 08:04:42 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:28.224 08:04:42 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:28.224 08:04:42 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:28.224 ************************************ 00:07:28.224 START TEST default_locks_via_rpc 00:07:28.224 ************************************ 00:07:28.224 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:07:28.224 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=1512344 00:07:28.224 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 1512344 00:07:28.224 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:28.224 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1512344 ']' 00:07:28.224 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:28.224 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:28.224 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:28.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:28.224 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:28.224 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.224 [2024-11-20 08:04:42.194478] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
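The negative test above wraps `waitforlisten` in the `NOT` helper: the test passes only because waiting on the already-killed pid fails (`es=1`). A minimal sketch of that inversion, assuming the arithmetic return seen in the trace (`(( !es == 0 ))`) and leaving out the real helper's extra handling of signal statuses above 128:

```shell
# Sketch of the NOT helper: run a command and invert its status, so the
# caller succeeds exactly when the wrapped command fails.
NOT() {
	local es=0
	"$@" || es=$?
	# !es is 1 only when es == 0; the comparison's truth value becomes
	# the exit status: 0 (pass) when the command failed, 1 otherwise
	((!es == 0))
}
```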
00:07:28.224 [2024-11-20 08:04:42.194518] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512344 ] 00:07:28.483 [2024-11-20 08:04:42.270172] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:28.483 [2024-11-20 08:04:42.311938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:28.743 08:04:42 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 1512344 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 1512344 00:07:28.743 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:29.002 08:04:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 1512344 00:07:29.002 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 1512344 ']' 00:07:29.002 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 1512344 00:07:29.002 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:07:29.002 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:29.002 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1512344 00:07:29.002 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:29.002 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:29.002 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1512344' 00:07:29.002 killing process with pid 1512344 00:07:29.002 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 1512344 00:07:29.002 08:04:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 1512344 00:07:29.262 00:07:29.262 real 0m1.044s 00:07:29.262 user 0m1.000s 00:07:29.262 sys 0m0.480s 00:07:29.262 08:04:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.262 08:04:43 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:29.262 ************************************ 00:07:29.262 END TEST default_locks_via_rpc 00:07:29.262 ************************************ 00:07:29.262 08:04:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:07:29.262 08:04:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:29.262 08:04:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.262 08:04:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:29.262 ************************************ 00:07:29.262 START TEST non_locking_app_on_locked_coremask 00:07:29.262 ************************************ 00:07:29.262 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:07:29.262 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=1512471 00:07:29.262 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 1512471 /var/tmp/spdk.sock 00:07:29.262 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:29.262 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1512471 ']' 00:07:29.262 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:29.262 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.262 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:07:29.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:29.262 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.262 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.520 [2024-11-20 08:04:43.304669] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:29.520 [2024-11-20 08:04:43.304711] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512471 ] 00:07:29.520 [2024-11-20 08:04:43.379676] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:29.520 [2024-11-20 08:04:43.421468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:29.779 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:29.779 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:29.779 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=1512667 00:07:29.779 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 1512667 /var/tmp/spdk2.sock 00:07:29.779 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:07:29.779 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1512667 ']' 00:07:29.779 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:07:29.779 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:29.779 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:29.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:29.779 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:29.779 08:04:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:29.779 [2024-11-20 08:04:43.681475] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:29.779 [2024-11-20 08:04:43.681523] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512667 ] 00:07:29.779 [2024-11-20 08:04:43.765629] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
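The repeated `locks_exist` checks in this suite pipe `lslocks -p <pid>` into `grep -q spdk_cpu_lock` to confirm the target `spdk_tgt` holds its per-core lock file. A sketch of that check (the "lslocks: write error" lines in the trace are just `lslocks` complaining when `grep -q` closes the pipe early):

```shell
# Sketch of locks_exist: true when the given pid holds a file lock on
# one of the spdk_cpu_lock files spdk_tgt takes per claimed core.
locks_exist() {
	local pid=$1
	lslocks -p "$pid" | grep -q spdk_cpu_lock
}
```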
00:07:29.779 [2024-11-20 08:04:43.765654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.037 [2024-11-20 08:04:43.857853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:30.604 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:30.604 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:30.604 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 1512471 00:07:30.604 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1512471 00:07:30.604 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:31.174 lslocks: write error 00:07:31.174 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 1512471 00:07:31.174 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1512471 ']' 00:07:31.174 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1512471 00:07:31.174 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:31.174 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.174 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1512471 00:07:31.174 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.174 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.174 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 1512471' 00:07:31.174 killing process with pid 1512471 00:07:31.174 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1512471 00:07:31.174 08:04:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1512471 00:07:31.742 08:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 1512667 00:07:31.742 08:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1512667 ']' 00:07:31.742 08:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1512667 00:07:31.742 08:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:31.742 08:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:31.742 08:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1512667 00:07:31.742 08:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:31.742 08:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:31.742 08:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1512667' 00:07:31.742 killing process with pid 1512667 00:07:31.742 08:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1512667 00:07:31.742 08:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1512667 00:07:32.002 00:07:32.002 real 0m2.680s 00:07:32.002 user 0m2.822s 00:07:32.002 sys 0m0.881s 00:07:32.002 08:04:45 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.002 08:04:45 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.002 ************************************ 00:07:32.002 END TEST non_locking_app_on_locked_coremask 00:07:32.002 ************************************ 00:07:32.002 08:04:45 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:07:32.002 08:04:45 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:32.002 08:04:45 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.002 08:04:45 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:32.002 ************************************ 00:07:32.002 START TEST locking_app_on_unlocked_coremask 00:07:32.002 ************************************ 00:07:32.002 08:04:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:07:32.002 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=1512971 00:07:32.002 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 1512971 /var/tmp/spdk.sock 00:07:32.002 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:07:32.002 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1512971 ']' 00:07:32.002 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:32.002 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.002 08:04:46 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:32.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:32.002 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.002 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.262 [2024-11-20 08:04:46.054111] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:32.262 [2024-11-20 08:04:46.054150] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1512971 ] 00:07:32.262 [2024-11-20 08:04:46.130105] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:32.262 [2024-11-20 08:04:46.130131] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.262 [2024-11-20 08:04:46.172039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:32.522 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:32.522 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:32.522 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=1513165 00:07:32.522 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 1513165 /var/tmp/spdk2.sock 00:07:32.522 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:07:32.522 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1513165 ']' 00:07:32.522 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:32.522 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:32.522 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:32.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:32.522 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:32.522 08:04:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:32.522 [2024-11-20 08:04:46.446853] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:07:32.522 [2024-11-20 08:04:46.446901] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513165 ] 00:07:32.522 [2024-11-20 08:04:46.533526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:32.782 [2024-11-20 08:04:46.621797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.350 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.350 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:33.350 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 1513165 00:07:33.350 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1513165 00:07:33.350 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:33.920 lslocks: write error 00:07:33.920 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 1512971 00:07:33.920 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1512971 ']' 00:07:33.920 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1512971 00:07:33.920 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:33.920 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:33.920 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1512971 00:07:33.920 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:33.920 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:33.920 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1512971' 00:07:33.920 killing process with pid 1512971 00:07:33.920 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1512971 00:07:33.920 08:04:47 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1512971 00:07:34.500 08:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 1513165 00:07:34.500 08:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1513165 ']' 00:07:34.500 08:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 1513165 00:07:34.500 08:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:34.500 08:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:34.500 08:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1513165 00:07:34.500 08:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:34.500 08:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:34.500 08:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1513165' 00:07:34.500 killing process with pid 1513165 00:07:34.500 08:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 1513165 00:07:34.500 08:04:48 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 1513165 00:07:34.760 00:07:34.760 real 0m2.700s 00:07:34.760 user 0m2.838s 00:07:34.760 sys 0m0.888s 00:07:34.760 08:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:34.760 08:04:48 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:34.760 ************************************ 00:07:34.760 END TEST locking_app_on_unlocked_coremask 00:07:34.760 ************************************ 00:07:34.760 08:04:48 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:07:34.760 08:04:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:34.760 08:04:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:34.760 08:04:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:34.760 ************************************ 00:07:34.760 START TEST locking_app_on_locked_coremask 00:07:34.760 ************************************ 00:07:34.760 08:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:07:34.760 08:04:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=1513466 00:07:34.760 08:04:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 1513466 /var/tmp/spdk.sock 00:07:34.760 08:04:48 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:07:34.760 08:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1513466 ']' 00:07:34.760 08:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:34.760 08:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.760 08:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.760 08:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.760 08:04:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.020 [2024-11-20 08:04:48.823629] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:35.020 [2024-11-20 08:04:48.823672] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513466 ] 00:07:35.020 [2024-11-20 08:04:48.899914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:35.020 [2024-11-20 08:04:48.941667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=1513657 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 1513657 /var/tmp/spdk2.sock 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 1513657 /var/tmp/spdk2.sock 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1513657 /var/tmp/spdk2.sock 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 1513657 ']' 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:35.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:35.280 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:35.280 [2024-11-20 08:04:49.197763] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:35.280 [2024-11-20 08:04:49.197812] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513657 ] 00:07:35.280 [2024-11-20 08:04:49.284043] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 1513466 has claimed it. 00:07:35.280 [2024-11-20 08:04:49.284076] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:35.848 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1513657) - No such process 00:07:35.848 ERROR: process (pid: 1513657) is no longer running 00:07:35.848 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:35.848 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:35.848 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:35.848 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:35.848 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:35.848 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:35.848 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 1513466 00:07:35.848 08:04:49 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 1513466 00:07:35.848 08:04:49 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:07:36.416 lslocks: write error 00:07:36.416 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 1513466 00:07:36.416 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 1513466 ']' 00:07:36.416 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 1513466 00:07:36.416 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:07:36.416 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:36.416 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1513466 00:07:36.417 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:36.417 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:36.417 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1513466' 00:07:36.417 killing process with pid 1513466 00:07:36.417 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 1513466 00:07:36.417 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 1513466 00:07:36.676 00:07:36.676 real 0m1.867s 00:07:36.676 user 0m1.995s 00:07:36.676 sys 0m0.609s 00:07:36.676 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:36.676 08:04:50 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:07:36.676 ************************************ 00:07:36.676 END TEST locking_app_on_locked_coremask 00:07:36.676 ************************************ 00:07:36.676 08:04:50 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:07:36.676 08:04:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.676 08:04:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.676 08:04:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:36.936 ************************************ 00:07:36.936 START TEST locking_overlapped_coremask 00:07:36.936 ************************************ 00:07:36.936 08:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:07:36.936 08:04:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=1513946 00:07:36.936 08:04:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 1513946 /var/tmp/spdk.sock 00:07:36.936 08:04:50 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:07:36.936 08:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1513946 ']' 00:07:36.936 08:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:36.936 08:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.936 08:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.936 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:36.936 08:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.936 08:04:50 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:36.936 [2024-11-20 08:04:50.761106] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:36.936 [2024-11-20 08:04:50.761145] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513946 ] 00:07:36.936 [2024-11-20 08:04:50.833289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:36.936 [2024-11-20 08:04:50.876938] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:36.936 [2024-11-20 08:04:50.877044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.936 [2024-11-20 08:04:50.877045] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=1513960 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 1513960 /var/tmp/spdk2.sock 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 1513960 /var/tmp/spdk2.sock 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 1513960 /var/tmp/spdk2.sock 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 1513960 ']' 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:37.195 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:37.195 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:37.195 [2024-11-20 08:04:51.153223] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:07:37.195 [2024-11-20 08:04:51.153271] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1513960 ] 00:07:37.454 [2024-11-20 08:04:51.244796] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1513946 has claimed it. 00:07:37.454 [2024-11-20 08:04:51.244835] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:07:38.023 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (1513960) - No such process 00:07:38.023 ERROR: process (pid: 1513960) is no longer running 00:07:38.023 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 1513946 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 1513946 ']' 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 1513946 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1513946 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1513946' 00:07:38.024 killing process with pid 1513946 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 1513946 00:07:38.024 08:04:51 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 1513946 00:07:38.283 00:07:38.283 real 0m1.446s 00:07:38.283 user 0m3.995s 00:07:38.283 sys 0m0.385s 00:07:38.283 08:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:38.283 08:04:52 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:07:38.283 
************************************ 00:07:38.283 END TEST locking_overlapped_coremask 00:07:38.283 ************************************ 00:07:38.283 08:04:52 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:07:38.283 08:04:52 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:38.283 08:04:52 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:38.283 08:04:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:38.283 ************************************ 00:07:38.283 START TEST locking_overlapped_coremask_via_rpc 00:07:38.283 ************************************ 00:07:38.283 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:07:38.283 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=1514215 00:07:38.283 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 1514215 /var/tmp/spdk.sock 00:07:38.283 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:07:38.283 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1514215 ']' 00:07:38.283 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:38.284 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.284 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
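The check_remaining_locks pattern traced above globs /var/tmp/spdk_cpu_lock_* and compares the result, as one string, against the brace-expanded expected set. A standalone sketch of that comparison (using a throwaway directory instead of /var/tmp, so it cannot disturb real SPDK lock files):

```shell
# Standalone sketch of the check_remaining_locks pattern: glob the lock
# files actually present and compare them, joined as a single string,
# against the brace-expanded expected list. A scratch directory stands
# in for /var/tmp so no real SPDK locks are touched.
tmpdir=$(mktemp -d)

# simulate the locks left behind by a target started with -m 0x7 (cores 0-2)
touch "$tmpdir"/spdk_cpu_lock_{000..002}

locks=("$tmpdir"/spdk_cpu_lock_*)                    # observed on disk
locks_expected=("$tmpdir"/spdk_cpu_lock_{000..002})  # what the test expects

if [[ "${locks[*]}" == "${locks_expected[*]}" ]]; then
    echo "locks match"
fi

rm -rf "$tmpdir"
```

The escaped pattern in the trace (`\/\v\a\r...`) is just a literal match, so plain string equality of the joined arrays, as above, is the same check.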
00:07:38.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:38.284 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.284 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.284 [2024-11-20 08:04:52.275693] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:38.284 [2024-11-20 08:04:52.275738] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514215 ] 00:07:38.543 [2024-11-20 08:04:52.347727] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:07:38.543 [2024-11-20 08:04:52.347749] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:38.543 [2024-11-20 08:04:52.391861] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.543 [2024-11-20 08:04:52.391969] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:38.543 [2024-11-20 08:04:52.391970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:38.803 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.803 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:38.803 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=1514228 00:07:38.803 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 1514228 /var/tmp/spdk2.sock 00:07:38.803 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:07:38.803 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1514228 ']' 00:07:38.803 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:38.803 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.803 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:38.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:07:38.803 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.803 08:04:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:38.803 [2024-11-20 08:04:52.665113] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:38.803 [2024-11-20 08:04:52.665164] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514228 ] 00:07:38.803 [2024-11-20 08:04:52.755365] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:07:38.803 [2024-11-20 08:04:52.755391] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:39.062 [2024-11-20 08:04:52.842625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:39.062 [2024-11-20 08:04:52.842742] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:39.062 [2024-11-20 08:04:52.842743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.631 08:04:53 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.631 [2024-11-20 08:04:53.504280] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 1514215 has claimed it. 00:07:39.631 request: 00:07:39.631 { 00:07:39.631 "method": "framework_enable_cpumask_locks", 00:07:39.631 "req_id": 1 00:07:39.631 } 00:07:39.631 Got JSON-RPC error response 00:07:39.631 response: 00:07:39.631 { 00:07:39.631 "code": -32603, 00:07:39.631 "message": "Failed to claim CPU core: 2" 00:07:39.631 } 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 1514215 /var/tmp/spdk.sock 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 1514215 ']' 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:39.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.631 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:39.891 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:39.891 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:39.891 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 1514228 /var/tmp/spdk2.sock 00:07:39.891 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 1514228 ']' 00:07:39.891 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:07:39.891 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:39.891 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:07:39.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
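The claim_cpu_cores failure above ("Cannot create lock on core 2, probably process 1514215 has claimed it") comes from per-core lock files that only one process can hold at a time. As a hedged illustration of that shape of conflict, the sketch below takes an exclusive lock with flock(1) on a scratch file and shows a second non-blocking attempt being refused; flock here is a stand-in, not the actual app.c locking code:

```shell
# Illustration of the per-core claim conflict reported above: one
# exclusive lock is taken on a scratch file, and a second non-blocking
# attempt on the same file is refused. flock(1) is a stand-in for
# whatever locking app.c actually performs; this is not the SPDK code.
lockfile=$(mktemp)

exec 9>"$lockfile"
flock -n 9 && echo "first claim on core: ok"

exec 8>"$lockfile"     # a second open file description on the same file
if ! flock -n 8; then
    echo "second claim refused: core already held"
fi
```

This also explains why the --disable-cpumask-locks runs later in the log start cleanly on overlapping masks: no lock file, no conflict, until framework_enable_cpumask_locks is invoked over RPC.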
00:07:39.891 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:39.891 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.151 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:40.151 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:07:40.151 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:07:40.151 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:07:40.151 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:07:40.151 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:07:40.151 00:07:40.151 real 0m1.699s 00:07:40.151 user 0m0.802s 00:07:40.151 sys 0m0.139s 00:07:40.151 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.151 08:04:53 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:40.151 ************************************ 00:07:40.151 END TEST locking_overlapped_coremask_via_rpc 00:07:40.151 ************************************ 00:07:40.151 08:04:53 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:07:40.151 08:04:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1514215 ]] 00:07:40.151 08:04:53 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 1514215 00:07:40.151 08:04:53 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1514215 ']' 00:07:40.151 08:04:53 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1514215 00:07:40.151 08:04:53 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:40.151 08:04:53 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.151 08:04:53 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1514215 00:07:40.151 08:04:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:40.151 08:04:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:40.151 08:04:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1514215' 00:07:40.151 killing process with pid 1514215 00:07:40.151 08:04:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1514215 00:07:40.151 08:04:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1514215 00:07:40.410 08:04:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1514228 ]] 00:07:40.410 08:04:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1514228 00:07:40.410 08:04:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1514228 ']' 00:07:40.410 08:04:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1514228 00:07:40.410 08:04:54 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:07:40.410 08:04:54 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:40.410 08:04:54 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1514228 00:07:40.410 08:04:54 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:07:40.410 08:04:54 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:07:40.410 08:04:54 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
1514228' 00:07:40.410 killing process with pid 1514228 00:07:40.410 08:04:54 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 1514228 00:07:40.410 08:04:54 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 1514228 00:07:40.671 08:04:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:40.671 08:04:54 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:07:40.671 08:04:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 1514215 ]] 00:07:40.671 08:04:54 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 1514215 00:07:40.671 08:04:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1514215 ']' 00:07:40.671 08:04:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1514215 00:07:40.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1514215) - No such process 00:07:40.671 08:04:54 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1514215 is not found' 00:07:40.671 Process with pid 1514215 is not found 00:07:40.671 08:04:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 1514228 ]] 00:07:40.671 08:04:54 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 1514228 00:07:40.671 08:04:54 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 1514228 ']' 00:07:40.671 08:04:54 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 1514228 00:07:40.671 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1514228) - No such process 00:07:40.671 08:04:54 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 1514228 is not found' 00:07:40.671 Process with pid 1514228 is not found 00:07:40.671 08:04:54 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:07:40.671 00:07:40.671 real 0m13.879s 00:07:40.671 user 0m24.152s 00:07:40.671 sys 0m4.797s 00:07:40.671 08:04:54 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.671 
08:04:54 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:07:40.671 ************************************ 00:07:40.671 END TEST cpu_locks 00:07:40.671 ************************************ 00:07:40.930 00:07:40.930 real 0m38.596s 00:07:40.930 user 1m13.463s 00:07:40.930 sys 0m8.343s 00:07:40.930 08:04:54 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:40.930 08:04:54 event -- common/autotest_common.sh@10 -- # set +x 00:07:40.930 ************************************ 00:07:40.930 END TEST event 00:07:40.930 ************************************ 00:07:40.930 08:04:54 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:40.930 08:04:54 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:40.930 08:04:54 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:40.930 08:04:54 -- common/autotest_common.sh@10 -- # set +x 00:07:40.930 ************************************ 00:07:40.930 START TEST thread 00:07:40.930 ************************************ 00:07:40.930 08:04:54 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:07:40.930 * Looking for test storage... 
00:07:40.930 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:07:40.930 08:04:54 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:40.930 08:04:54 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:07:40.930 08:04:54 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:40.930 08:04:54 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:40.930 08:04:54 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:40.931 08:04:54 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:40.931 08:04:54 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:40.931 08:04:54 thread -- scripts/common.sh@336 -- # IFS=.-: 00:07:40.931 08:04:54 thread -- scripts/common.sh@336 -- # read -ra ver1 00:07:40.931 08:04:54 thread -- scripts/common.sh@337 -- # IFS=.-: 00:07:40.931 08:04:54 thread -- scripts/common.sh@337 -- # read -ra ver2 00:07:40.931 08:04:54 thread -- scripts/common.sh@338 -- # local 'op=<' 00:07:40.931 08:04:54 thread -- scripts/common.sh@340 -- # ver1_l=2 00:07:40.931 08:04:54 thread -- scripts/common.sh@341 -- # ver2_l=1 00:07:40.931 08:04:54 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:40.931 08:04:54 thread -- scripts/common.sh@344 -- # case "$op" in 00:07:40.931 08:04:54 thread -- scripts/common.sh@345 -- # : 1 00:07:40.931 08:04:54 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:40.931 08:04:54 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:40.931 08:04:54 thread -- scripts/common.sh@365 -- # decimal 1 00:07:40.931 08:04:54 thread -- scripts/common.sh@353 -- # local d=1 00:07:40.931 08:04:54 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:40.931 08:04:54 thread -- scripts/common.sh@355 -- # echo 1 00:07:40.931 08:04:54 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:07:40.931 08:04:54 thread -- scripts/common.sh@366 -- # decimal 2 00:07:40.931 08:04:54 thread -- scripts/common.sh@353 -- # local d=2 00:07:40.931 08:04:54 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:40.931 08:04:54 thread -- scripts/common.sh@355 -- # echo 2 00:07:40.931 08:04:54 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:07:41.191 08:04:54 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:41.191 08:04:54 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:41.191 08:04:54 thread -- scripts/common.sh@368 -- # return 0 00:07:41.191 08:04:54 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:41.191 08:04:54 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:41.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.191 --rc genhtml_branch_coverage=1 00:07:41.191 --rc genhtml_function_coverage=1 00:07:41.191 --rc genhtml_legend=1 00:07:41.191 --rc geninfo_all_blocks=1 00:07:41.191 --rc geninfo_unexecuted_blocks=1 00:07:41.191 00:07:41.191 ' 00:07:41.191 08:04:54 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:41.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.191 --rc genhtml_branch_coverage=1 00:07:41.191 --rc genhtml_function_coverage=1 00:07:41.191 --rc genhtml_legend=1 00:07:41.191 --rc geninfo_all_blocks=1 00:07:41.191 --rc geninfo_unexecuted_blocks=1 00:07:41.191 00:07:41.191 ' 00:07:41.191 08:04:54 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:41.191 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.191 --rc genhtml_branch_coverage=1 00:07:41.191 --rc genhtml_function_coverage=1 00:07:41.191 --rc genhtml_legend=1 00:07:41.191 --rc geninfo_all_blocks=1 00:07:41.191 --rc geninfo_unexecuted_blocks=1 00:07:41.191 00:07:41.191 ' 00:07:41.191 08:04:54 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:41.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:41.191 --rc genhtml_branch_coverage=1 00:07:41.191 --rc genhtml_function_coverage=1 00:07:41.191 --rc genhtml_legend=1 00:07:41.191 --rc geninfo_all_blocks=1 00:07:41.191 --rc geninfo_unexecuted_blocks=1 00:07:41.191 00:07:41.191 ' 00:07:41.191 08:04:54 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:41.191 08:04:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:41.191 08:04:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:41.191 08:04:54 thread -- common/autotest_common.sh@10 -- # set +x 00:07:41.191 ************************************ 00:07:41.191 START TEST thread_poller_perf 00:07:41.191 ************************************ 00:07:41.191 08:04:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:07:41.191 [2024-11-20 08:04:55.007733] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
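The scripts/common.sh trace above (`cmp_versions 1.15 '<' 2`) splits each version string on dots and compares field by field before selecting the newer lcov --rc options. A compact alternative sketch of the same ordering check, leaning on GNU `sort -V` rather than the autotest helper itself:

```shell
# Compact version-ordering sketch. This is not the scripts/common.sh
# helper traced above (which compares dot-separated fields one by one);
# it reaches the same verdicts via GNU sort -V.
ver_lt() {
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

ver_lt 1.15 2 && echo "lcov 1.15 predates 2: use the legacy --rc options"
```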
00:07:41.191 [2024-11-20 08:04:55.007803] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1514787 ] 00:07:41.191 [2024-11-20 08:04:55.084477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:41.191 [2024-11-20 08:04:55.124114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.191 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:07:42.571 [2024-11-20T07:04:56.599Z] ====================================== 00:07:42.571 [2024-11-20T07:04:56.599Z] busy:2105168444 (cyc) 00:07:42.571 [2024-11-20T07:04:56.600Z] total_run_count: 404000 00:07:42.572 [2024-11-20T07:04:56.600Z] tsc_hz: 2100000000 (cyc) 00:07:42.572 [2024-11-20T07:04:56.600Z] ====================================== 00:07:42.572 [2024-11-20T07:04:56.600Z] poller_cost: 5210 (cyc), 2480 (nsec) 00:07:42.572 00:07:42.572 real 0m1.184s 00:07:42.572 user 0m1.109s 00:07:42.572 sys 0m0.072s 00:07:42.572 08:04:56 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:42.572 08:04:56 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:42.572 ************************************ 00:07:42.572 END TEST thread_poller_perf 00:07:42.572 ************************************ 00:07:42.572 08:04:56 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:42.572 08:04:56 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:07:42.572 08:04:56 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:42.572 08:04:56 thread -- common/autotest_common.sh@10 -- # set +x 00:07:42.572 ************************************ 00:07:42.572 START TEST thread_poller_perf 00:07:42.572 
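The poller_perf summaries (the 1 us period run just above and the 0 us period run that follows) derive poller_cost as busy cycles divided by total_run_count, then convert to nanoseconds via the reported tsc_hz of 2100000000. Integer shell arithmetic reproduces the printed figures:

```shell
# Reproduce the poller_cost lines from the two poller_perf summaries.
#   poller_cost (cyc)  = busy cycles / total_run_count
#   poller_cost (nsec) = cyc * 1e9 / tsc_hz, computed in kHz below so
#                        the division stays within integer arithmetic.
tsc_khz=2100000    # tsc_hz 2100000000 expressed in kHz

cyc1=$(( 2105168444 / 404000 ))          # 1 us period run
ns1=$(( cyc1 * 1000000 / tsc_khz ))
echo "run 1: poller_cost: ${cyc1} (cyc), ${ns1} (nsec)"

cyc2=$(( 2101470450 / 5429000 ))         # 0 us period run
ns2=$(( cyc2 * 1000000 / tsc_khz ))
echo "run 2: poller_cost: ${cyc2} (cyc), ${ns2} (nsec)"
```

The per-call cost drops from roughly 5210 to 387 cycles with a zero-microsecond period because the timed variant pays for timer bookkeeping on every poll.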
************************************ 00:07:42.572 08:04:56 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:07:42.572 [2024-11-20 08:04:56.262287] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:42.572 [2024-11-20 08:04:56.262356] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515040 ] 00:07:42.572 [2024-11-20 08:04:56.338782] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:42.572 [2024-11-20 08:04:56.377866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:42.572 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:07:43.510 [2024-11-20T07:04:57.539Z] ====================================== 00:07:43.511 [2024-11-20T07:04:57.539Z] busy:2101470450 (cyc) 00:07:43.511 [2024-11-20T07:04:57.539Z] total_run_count: 5429000 00:07:43.511 [2024-11-20T07:04:57.539Z] tsc_hz: 2100000000 (cyc) 00:07:43.511 [2024-11-20T07:04:57.539Z] ====================================== 00:07:43.511 [2024-11-20T07:04:57.539Z] poller_cost: 387 (cyc), 184 (nsec) 00:07:43.511 00:07:43.511 real 0m1.180s 00:07:43.511 user 0m1.098s 00:07:43.511 sys 0m0.079s 00:07:43.511 08:04:57 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.511 08:04:57 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:07:43.511 ************************************ 00:07:43.511 END TEST thread_poller_perf 00:07:43.511 ************************************ 00:07:43.511 08:04:57 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:07:43.511 00:07:43.511 real 0m2.670s 00:07:43.511 user 0m2.351s 00:07:43.511 sys 0m0.335s 00:07:43.511 08:04:57 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.511 08:04:57 thread -- common/autotest_common.sh@10 -- # set +x 00:07:43.511 ************************************ 00:07:43.511 END TEST thread 00:07:43.511 ************************************ 00:07:43.511 08:04:57 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:07:43.511 08:04:57 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:43.511 08:04:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:43.511 08:04:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.511 08:04:57 -- common/autotest_common.sh@10 -- # set +x 00:07:43.511 ************************************ 00:07:43.511 START TEST app_cmdline 00:07:43.511 ************************************ 00:07:43.511 08:04:57 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:43.771 * Looking for test storage... 00:07:43.771 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:43.771 08:04:57 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:43.771 08:04:57 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:07:43.771 08:04:57 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:43.771 08:04:57 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@345 -- # : 1 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:07:43.771 08:04:57 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:43.772 08:04:57 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:07:43.772 08:04:57 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:07:43.772 08:04:57 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:07:43.772 08:04:57 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:07:43.772 08:04:57 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:43.772 08:04:57 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:07:43.772 08:04:57 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:07:43.772 08:04:57 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:43.772 08:04:57 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:43.772 08:04:57 app_cmdline -- scripts/common.sh@368 -- # return 0 00:07:43.772 08:04:57 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:43.772 08:04:57 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:43.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.772 --rc genhtml_branch_coverage=1 
00:07:43.772 --rc genhtml_function_coverage=1 00:07:43.772 --rc genhtml_legend=1 00:07:43.772 --rc geninfo_all_blocks=1 00:07:43.772 --rc geninfo_unexecuted_blocks=1 00:07:43.772 00:07:43.772 ' 00:07:43.772 08:04:57 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:43.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.772 --rc genhtml_branch_coverage=1 00:07:43.772 --rc genhtml_function_coverage=1 00:07:43.772 --rc genhtml_legend=1 00:07:43.772 --rc geninfo_all_blocks=1 00:07:43.772 --rc geninfo_unexecuted_blocks=1 00:07:43.772 00:07:43.772 ' 00:07:43.772 08:04:57 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:43.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.772 --rc genhtml_branch_coverage=1 00:07:43.772 --rc genhtml_function_coverage=1 00:07:43.772 --rc genhtml_legend=1 00:07:43.772 --rc geninfo_all_blocks=1 00:07:43.772 --rc geninfo_unexecuted_blocks=1 00:07:43.772 00:07:43.772 ' 00:07:43.772 08:04:57 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:43.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:43.772 --rc genhtml_branch_coverage=1 00:07:43.772 --rc genhtml_function_coverage=1 00:07:43.772 --rc genhtml_legend=1 00:07:43.772 --rc geninfo_all_blocks=1 00:07:43.772 --rc geninfo_unexecuted_blocks=1 00:07:43.772 00:07:43.772 ' 00:07:43.772 08:04:57 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:43.772 08:04:57 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=1515335 00:07:43.772 08:04:57 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:43.772 08:04:57 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 1515335 00:07:43.772 08:04:57 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 1515335 ']' 00:07:43.772 08:04:57 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:07:43.772 08:04:57 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:43.772 08:04:57 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.772 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.772 08:04:57 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:43.772 08:04:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:43.772 [2024-11-20 08:04:57.752013] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:43.772 [2024-11-20 08:04:57.752057] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1515335 ] 00:07:44.042 [2024-11-20 08:04:57.828079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:44.042 [2024-11-20 08:04:57.869745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:44.675 08:04:58 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:44.675 08:04:58 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:07:44.675 08:04:58 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:44.962 { 00:07:44.962 "version": "SPDK v25.01-pre git sha1 6f7b42a3a", 00:07:44.962 "fields": { 00:07:44.962 "major": 25, 00:07:44.962 "minor": 1, 00:07:44.962 "patch": 0, 00:07:44.962 "suffix": "-pre", 00:07:44.962 "commit": "6f7b42a3a" 00:07:44.962 } 00:07:44.962 } 00:07:44.962 08:04:58 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:44.962 08:04:58 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:44.962 08:04:58 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:07:44.962 08:04:58 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:44.962 08:04:58 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:44.962 08:04:58 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:44.962 08:04:58 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:44.962 08:04:58 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:44.962 08:04:58 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:44.962 08:04:58 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:44.962 request: 00:07:44.962 { 00:07:44.962 "method": "env_dpdk_get_mem_stats", 00:07:44.962 "req_id": 1 00:07:44.962 } 00:07:44.962 Got JSON-RPC error response 00:07:44.962 response: 00:07:44.962 { 00:07:44.962 "code": -32601, 00:07:44.962 "message": "Method not found" 00:07:44.962 } 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:44.962 08:04:58 app_cmdline -- app/cmdline.sh@1 -- # killprocess 1515335 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 1515335 ']' 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 1515335 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:07:44.962 08:04:58 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:45.252 08:04:58 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1515335 00:07:45.252 08:04:59 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:45.252 08:04:59 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:45.253 08:04:59 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1515335' 00:07:45.253 killing process with pid 1515335 00:07:45.253 
08:04:59 app_cmdline -- common/autotest_common.sh@973 -- # kill 1515335 00:07:45.253 08:04:59 app_cmdline -- common/autotest_common.sh@978 -- # wait 1515335 00:07:45.512 00:07:45.512 real 0m1.791s 00:07:45.512 user 0m2.134s 00:07:45.512 sys 0m0.471s 00:07:45.512 08:04:59 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.512 08:04:59 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:45.512 ************************************ 00:07:45.512 END TEST app_cmdline 00:07:45.512 ************************************ 00:07:45.512 08:04:59 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:45.512 08:04:59 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:45.512 08:04:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.512 08:04:59 -- common/autotest_common.sh@10 -- # set +x 00:07:45.512 ************************************ 00:07:45.512 START TEST version 00:07:45.512 ************************************ 00:07:45.512 08:04:59 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:45.512 * Looking for test storage... 
00:07:45.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:45.512 08:04:59 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:45.512 08:04:59 version -- common/autotest_common.sh@1693 -- # lcov --version 00:07:45.512 08:04:59 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:45.771 08:04:59 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:45.771 08:04:59 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:45.771 08:04:59 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:45.772 08:04:59 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:45.772 08:04:59 version -- scripts/common.sh@336 -- # IFS=.-: 00:07:45.772 08:04:59 version -- scripts/common.sh@336 -- # read -ra ver1 00:07:45.772 08:04:59 version -- scripts/common.sh@337 -- # IFS=.-: 00:07:45.772 08:04:59 version -- scripts/common.sh@337 -- # read -ra ver2 00:07:45.772 08:04:59 version -- scripts/common.sh@338 -- # local 'op=<' 00:07:45.772 08:04:59 version -- scripts/common.sh@340 -- # ver1_l=2 00:07:45.772 08:04:59 version -- scripts/common.sh@341 -- # ver2_l=1 00:07:45.772 08:04:59 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:45.772 08:04:59 version -- scripts/common.sh@344 -- # case "$op" in 00:07:45.772 08:04:59 version -- scripts/common.sh@345 -- # : 1 00:07:45.772 08:04:59 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:45.772 08:04:59 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:45.772 08:04:59 version -- scripts/common.sh@365 -- # decimal 1 00:07:45.772 08:04:59 version -- scripts/common.sh@353 -- # local d=1 00:07:45.772 08:04:59 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:45.772 08:04:59 version -- scripts/common.sh@355 -- # echo 1 00:07:45.772 08:04:59 version -- scripts/common.sh@365 -- # ver1[v]=1 00:07:45.772 08:04:59 version -- scripts/common.sh@366 -- # decimal 2 00:07:45.772 08:04:59 version -- scripts/common.sh@353 -- # local d=2 00:07:45.772 08:04:59 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:45.772 08:04:59 version -- scripts/common.sh@355 -- # echo 2 00:07:45.772 08:04:59 version -- scripts/common.sh@366 -- # ver2[v]=2 00:07:45.772 08:04:59 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:45.772 08:04:59 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:45.772 08:04:59 version -- scripts/common.sh@368 -- # return 0 00:07:45.772 08:04:59 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:45.772 08:04:59 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:45.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.772 --rc genhtml_branch_coverage=1 00:07:45.772 --rc genhtml_function_coverage=1 00:07:45.772 --rc genhtml_legend=1 00:07:45.772 --rc geninfo_all_blocks=1 00:07:45.772 --rc geninfo_unexecuted_blocks=1 00:07:45.772 00:07:45.772 ' 00:07:45.772 08:04:59 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:45.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.772 --rc genhtml_branch_coverage=1 00:07:45.772 --rc genhtml_function_coverage=1 00:07:45.772 --rc genhtml_legend=1 00:07:45.772 --rc geninfo_all_blocks=1 00:07:45.772 --rc geninfo_unexecuted_blocks=1 00:07:45.772 00:07:45.772 ' 00:07:45.772 08:04:59 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:45.772 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.772 --rc genhtml_branch_coverage=1 00:07:45.772 --rc genhtml_function_coverage=1 00:07:45.772 --rc genhtml_legend=1 00:07:45.772 --rc geninfo_all_blocks=1 00:07:45.772 --rc geninfo_unexecuted_blocks=1 00:07:45.772 00:07:45.772 ' 00:07:45.772 08:04:59 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:45.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:45.772 --rc genhtml_branch_coverage=1 00:07:45.772 --rc genhtml_function_coverage=1 00:07:45.772 --rc genhtml_legend=1 00:07:45.772 --rc geninfo_all_blocks=1 00:07:45.772 --rc geninfo_unexecuted_blocks=1 00:07:45.772 00:07:45.772 ' 00:07:45.772 08:04:59 version -- app/version.sh@17 -- # get_header_version major 00:07:45.772 08:04:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:45.772 08:04:59 version -- app/version.sh@14 -- # cut -f2 00:07:45.772 08:04:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.772 08:04:59 version -- app/version.sh@17 -- # major=25 00:07:45.772 08:04:59 version -- app/version.sh@18 -- # get_header_version minor 00:07:45.772 08:04:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:45.772 08:04:59 version -- app/version.sh@14 -- # cut -f2 00:07:45.772 08:04:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.772 08:04:59 version -- app/version.sh@18 -- # minor=1 00:07:45.772 08:04:59 version -- app/version.sh@19 -- # get_header_version patch 00:07:45.772 08:04:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:45.772 08:04:59 version -- app/version.sh@14 -- # cut -f2 00:07:45.772 08:04:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.772 
08:04:59 version -- app/version.sh@19 -- # patch=0 00:07:45.772 08:04:59 version -- app/version.sh@20 -- # get_header_version suffix 00:07:45.772 08:04:59 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:45.772 08:04:59 version -- app/version.sh@14 -- # cut -f2 00:07:45.772 08:04:59 version -- app/version.sh@14 -- # tr -d '"' 00:07:45.772 08:04:59 version -- app/version.sh@20 -- # suffix=-pre 00:07:45.772 08:04:59 version -- app/version.sh@22 -- # version=25.1 00:07:45.772 08:04:59 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:45.772 08:04:59 version -- app/version.sh@28 -- # version=25.1rc0 00:07:45.772 08:04:59 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:45.772 08:04:59 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:07:45.772 08:04:59 version -- app/version.sh@30 -- # py_version=25.1rc0 00:07:45.772 08:04:59 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:07:45.772 00:07:45.772 real 0m0.235s 00:07:45.772 user 0m0.159s 00:07:45.772 sys 0m0.118s 00:07:45.772 08:04:59 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:45.772 08:04:59 version -- common/autotest_common.sh@10 -- # set +x 00:07:45.772 ************************************ 00:07:45.772 END TEST version 00:07:45.772 ************************************ 00:07:45.772 08:04:59 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:07:45.772 08:04:59 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:07:45.772 08:04:59 -- spdk/autotest.sh@194 -- # uname -s 00:07:45.772 08:04:59 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:07:45.772 08:04:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:45.772 08:04:59 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:07:45.772 08:04:59 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:07:45.772 08:04:59 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:45.772 08:04:59 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:45.772 08:04:59 -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:45.772 08:04:59 -- common/autotest_common.sh@10 -- # set +x 00:07:45.772 08:04:59 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:45.772 08:04:59 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:07:45.772 08:04:59 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:07:45.772 08:04:59 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:07:45.772 08:04:59 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:07:45.772 08:04:59 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:07:45.772 08:04:59 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:45.772 08:04:59 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:45.772 08:04:59 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:45.772 08:04:59 -- common/autotest_common.sh@10 -- # set +x 00:07:45.772 ************************************ 00:07:45.772 START TEST nvmf_tcp 00:07:45.772 ************************************ 00:07:45.772 08:04:59 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:46.032 * Looking for test storage... 
00:07:46.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:46.032 08:04:59 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.032 08:04:59 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.032 08:04:59 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.032 08:04:59 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.032 08:04:59 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:07:46.032 08:04:59 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.032 08:04:59 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.032 --rc genhtml_branch_coverage=1 00:07:46.032 --rc genhtml_function_coverage=1 00:07:46.032 --rc genhtml_legend=1 00:07:46.032 --rc geninfo_all_blocks=1 00:07:46.032 --rc geninfo_unexecuted_blocks=1 00:07:46.032 00:07:46.032 ' 00:07:46.032 08:04:59 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.032 --rc genhtml_branch_coverage=1 00:07:46.032 --rc genhtml_function_coverage=1 00:07:46.032 --rc genhtml_legend=1 00:07:46.032 --rc geninfo_all_blocks=1 00:07:46.032 --rc geninfo_unexecuted_blocks=1 00:07:46.032 00:07:46.032 ' 00:07:46.032 08:04:59 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:07:46.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.032 --rc genhtml_branch_coverage=1 00:07:46.032 --rc genhtml_function_coverage=1 00:07:46.032 --rc genhtml_legend=1 00:07:46.032 --rc geninfo_all_blocks=1 00:07:46.032 --rc geninfo_unexecuted_blocks=1 00:07:46.032 00:07:46.032 ' 00:07:46.032 08:04:59 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.032 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.032 --rc genhtml_branch_coverage=1 00:07:46.032 --rc genhtml_function_coverage=1 00:07:46.032 --rc genhtml_legend=1 00:07:46.032 --rc geninfo_all_blocks=1 00:07:46.032 --rc geninfo_unexecuted_blocks=1 00:07:46.032 00:07:46.032 ' 00:07:46.032 08:04:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:46.032 08:04:59 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:07:46.032 08:04:59 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:46.032 08:04:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.032 08:04:59 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.032 08:04:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:46.032 ************************************ 00:07:46.032 START TEST nvmf_target_core 00:07:46.032 ************************************ 00:07:46.032 08:04:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:07:46.032 * Looking for test storage... 
00:07:46.032 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:46.032 08:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.032 08:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.032 08:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.293 --rc genhtml_branch_coverage=1 00:07:46.293 --rc genhtml_function_coverage=1 00:07:46.293 --rc genhtml_legend=1 00:07:46.293 --rc geninfo_all_blocks=1 00:07:46.293 --rc geninfo_unexecuted_blocks=1 00:07:46.293 00:07:46.293 ' 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.293 --rc genhtml_branch_coverage=1 
00:07:46.293 --rc genhtml_function_coverage=1 00:07:46.293 --rc genhtml_legend=1 00:07:46.293 --rc geninfo_all_blocks=1 00:07:46.293 --rc geninfo_unexecuted_blocks=1 00:07:46.293 00:07:46.293 ' 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.293 --rc genhtml_branch_coverage=1 00:07:46.293 --rc genhtml_function_coverage=1 00:07:46.293 --rc genhtml_legend=1 00:07:46.293 --rc geninfo_all_blocks=1 00:07:46.293 --rc geninfo_unexecuted_blocks=1 00:07:46.293 00:07:46.293 ' 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.293 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.293 --rc genhtml_branch_coverage=1 00:07:46.293 --rc genhtml_function_coverage=1 00:07:46.293 --rc genhtml_legend=1 00:07:46.293 --rc geninfo_all_blocks=1 00:07:46.293 --rc geninfo_unexecuted_blocks=1 00:07:46.293 00:07:46.293 ' 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 
00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- 
paths/export.sh@5 -- # export PATH 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:46.293 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 
']' 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:07:46.293 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:07:46.294 08:05:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:46.294 08:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.294 08:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.294 08:05:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.294 ************************************ 00:07:46.294 START TEST nvmf_abort 00:07:46.294 ************************************ 00:07:46.294 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:07:46.294 * Looking for test storage... 
00:07:46.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.294 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.294 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.294 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.553 
08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.553 --rc genhtml_branch_coverage=1 00:07:46.553 --rc genhtml_function_coverage=1 00:07:46.553 --rc genhtml_legend=1 00:07:46.553 --rc geninfo_all_blocks=1 00:07:46.553 --rc 
geninfo_unexecuted_blocks=1 00:07:46.553 00:07:46.553 ' 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.553 --rc genhtml_branch_coverage=1 00:07:46.553 --rc genhtml_function_coverage=1 00:07:46.553 --rc genhtml_legend=1 00:07:46.553 --rc geninfo_all_blocks=1 00:07:46.553 --rc geninfo_unexecuted_blocks=1 00:07:46.553 00:07:46.553 ' 00:07:46.553 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.553 --rc genhtml_branch_coverage=1 00:07:46.553 --rc genhtml_function_coverage=1 00:07:46.553 --rc genhtml_legend=1 00:07:46.553 --rc geninfo_all_blocks=1 00:07:46.553 --rc geninfo_unexecuted_blocks=1 00:07:46.554 00:07:46.554 ' 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.554 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.554 --rc genhtml_branch_coverage=1 00:07:46.554 --rc genhtml_function_coverage=1 00:07:46.554 --rc genhtml_legend=1 00:07:46.554 --rc geninfo_all_blocks=1 00:07:46.554 --rc geninfo_unexecuted_blocks=1 00:07:46.554 00:07:46.554 ' 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
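The `lt 1.15 2` / `cmp_versions` trace above splits each version string into components and compares them field by field. A simplified sketch of that idea (not the actual `scripts/common.sh`, which also splits on `-` and `:` via `IFS=.-:`):

```shell
# Simplified version comparison: split on dots, compare numerically
# field by field; missing fields are treated as 0.
lt() {
    local IFS=.
    local -a ver1=($1) ver2=($2)
    local i max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${ver1[i]:-0} b=${ver2[i]:-0}
        (( a < b )) && return 0   # first differing field decides
        (( a > b )) && return 1
    done
    return 1   # versions equal: not less-than
}
```

Here `lt 1.15 2` succeeds (lcov 1.15 predates 2.x), which is what steers the harness toward the legacy `--rc lcov_*` option set seen below.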
00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
paths/export.sh@5 -- # export PATH 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:46.554 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:07:46.554 08:05:00 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:07:46.554 08:05:00 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # mlx=() 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:53.132 08:05:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:53.132 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:53.132 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:53.132 08:05:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.132 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:53.132 Found net devices under 0000:86:00.0: cvl_0_0 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:53.133 Found net devices under 0000:86:00.1: cvl_0_1 00:07:53.133 
08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@247 -- # create_target_ns 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set lo up' 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:07:53.133 08:05:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:07:53.133 08:05:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:07:53.133 10.0.0.1 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:07:53.133 10.0.0.2 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 
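The `val_to_ip` calls above turn the pool integers 167772161/167772162 (0x0A000001/2) into 10.0.0.1 and 10.0.0.2. A hypothetical re-implementation of that helper using bit shifts (the trace only shows the final `printf`, so the decomposition step here is an assumption):

```shell
# Decompose a 32-bit integer into dotted-quad notation.
# 167772161 == 0x0A000001 -> 10.0.0.1
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}
```

Keeping the pool as an integer lets the harness hand out consecutive initiator/target pairs by simple arithmetic (`ip_pool += 2` per pair), as the `setup_interfaces` loop does.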
00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:07:53.133 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:07:53.134 
08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # 
ip=10.0.0.1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:07:53.134 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:53.134 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.323 ms 00:07:53.134 00:07:53.134 --- 10.0.0.1 ping statistics --- 00:07:53.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.134 rtt min/avg/max/mdev = 0.323/0.323/0.323/0.000 ms 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 
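The get_ip_address calls above recover each device's address by reading it back from the ifalias file that set_ip wrote. A hedged sketch of that read; the optional sysfs-root argument is an addition for illustration and testing, since the real helper reads /sys directly and, when a namespace is given, wraps the cat in "ip netns exec" exactly as the trace shows:

```shell
# get_ifalias_ip: read the IP recorded in a device's ifalias file,
# the mechanism get_ip_address uses in the trace above.
# The sysfs-root parameter is hypothetical, added so the sketch can be
# exercised without real network devices.
get_ifalias_ip() {
    local dev=$1 sysfs=${2:-/sys}
    cat "$sysfs/class/net/$dev/ifalias"
}
```

For the namespaced target side, the equivalent of the trace's read would be `ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias`.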
00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:07:53.134 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.134 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:07:53.134 00:07:53.134 --- 10.0.0.2 ping statistics --- 00:07:53.134 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.134 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:07:53.134 
08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:07:53.134 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 
]] 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:07:53.135 ' 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=1519066 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # ip netns 
exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 1519066 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1519066 ']' 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.135 [2024-11-20 08:05:06.615904] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:07:53.135 [2024-11-20 08:05:06.615945] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.135 [2024-11-20 08:05:06.695493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:53.135 [2024-11-20 08:05:06.738220] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.135 [2024-11-20 08:05:06.738256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
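waitforlisten above blocks until the freshly started nvmf_tgt exposes its RPC socket at /var/tmp/spdk.sock (note local max_retries=100 in the trace). A simplified sketch of such a polling loop, under the assumption that it is a retry-bounded check for the UNIX socket; the real autotest helper also verifies the pid is still alive between retries:

```shell
# wait_for_rpc_sock: poll until a UNIX-domain socket appears, as
# waitforlisten does for /var/tmp/spdk.sock in the trace above.
# Function name is illustrative; logic is a sketch, not the SPDK helper.
wait_for_rpc_sock() {
    local sock=$1 max_retries=${2:-100} i=0
    while [ ! -S "$sock" ]; do
        i=$((i + 1))
        [ "$i" -ge "$max_retries" ] && return 1
        sleep 0.1
    done
    return 0
}
```

Once the socket exists, the subsequent rpc_cmd calls (nvmf_create_transport, bdev_malloc_create, and so on) can be issued against it.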
00:07:53.135 [2024-11-20 08:05:06.738263] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.135 [2024-11-20 08:05:06.738269] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.135 [2024-11-20 08:05:06.738274] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.135 [2024-11-20 08:05:06.739684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.135 [2024-11-20 08:05:06.739771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.135 [2024-11-20 08:05:06.739771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.135 [2024-11-20 08:05:06.887715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.135 Malloc0 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.135 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.135 Delay0 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.136 08:05:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.136 [2024-11-20 08:05:06.975217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.136 08:05:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:07:53.136 [2024-11-20 08:05:07.071244] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:07:55.671 Initializing NVMe Controllers 00:07:55.671 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:55.671 controller IO queue size 128 less than required 00:07:55.671 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:07:55.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:07:55.672 Initialization complete. Launching workers. 
00:07:55.672 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37547 00:07:55.672 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37608, failed to submit 62 00:07:55.672 success 37551, unsuccessful 57, failed 0 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:07:55.672 rmmod nvme_tcp 00:07:55.672 rmmod nvme_fabrics 00:07:55.672 rmmod nvme_keyring 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:07:55.672 08:05:09 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 1519066 ']' 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 1519066 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1519066 ']' 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1519066 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1519066 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1519066' 00:07:55.672 killing process with pid 1519066 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1519066 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1519066 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@254 -- # local dev 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@257 -- # remove_target_ns 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:55.672 08:05:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@258 -- # delete_main_bridge 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@121 -- # return 0 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@212 -- # [[ -n '' ]] 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@274 -- # iptr 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # iptables-save 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@548 -- # iptables-restore 00:07:57.576 00:07:57.576 real 0m11.351s 00:07:57.576 user 0m11.707s 00:07:57.576 sys 0m5.441s 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:07:57.576 ************************************ 00:07:57.576 END TEST nvmf_abort 00:07:57.576 ************************************ 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:57.576 08:05:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:57.836 ************************************ 00:07:57.836 START TEST 
nvmf_ns_hotplug_stress 00:07:57.836 ************************************ 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:07:57.836 * Looking for test storage... 00:07:57.836 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:07:57.836 08:05:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:57.836 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:57.837 08:05:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:57.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.837 --rc genhtml_branch_coverage=1 00:07:57.837 --rc genhtml_function_coverage=1 00:07:57.837 --rc genhtml_legend=1 00:07:57.837 --rc geninfo_all_blocks=1 00:07:57.837 --rc geninfo_unexecuted_blocks=1 00:07:57.837 00:07:57.837 ' 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:57.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.837 --rc genhtml_branch_coverage=1 00:07:57.837 --rc genhtml_function_coverage=1 00:07:57.837 --rc genhtml_legend=1 00:07:57.837 --rc geninfo_all_blocks=1 00:07:57.837 --rc geninfo_unexecuted_blocks=1 00:07:57.837 00:07:57.837 ' 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:57.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.837 --rc genhtml_branch_coverage=1 00:07:57.837 --rc genhtml_function_coverage=1 00:07:57.837 --rc genhtml_legend=1 00:07:57.837 --rc geninfo_all_blocks=1 00:07:57.837 --rc geninfo_unexecuted_blocks=1 00:07:57.837 00:07:57.837 ' 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:57.837 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:57.837 --rc genhtml_branch_coverage=1 00:07:57.837 --rc genhtml_function_coverage=1 00:07:57.837 
--rc genhtml_legend=1 00:07:57.837 --rc geninfo_all_blocks=1 00:07:57.837 --rc geninfo_unexecuted_blocks=1 00:07:57.837 00:07:57.837 ' 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:57.837 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:07:57.837 08:05:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # net_devs=() 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:08:04.409 08:05:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:04.409 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:04.410 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:04.410 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:04.410 Found net devices under 0000:86:00.0: cvl_0_0 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:04.410 08:05:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:04.410 Found net devices under 0000:86:00.1: cvl_0_1 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@247 -- # 
create_target_ns 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 
00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:08:04.410 10.0.0.1 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:08:04.410 10.0.0.2 00:08:04.410 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip 
link set cvl_0_0 up' 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:08:04.411 08:05:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:04.411 
08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:08:04.411 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:04.411 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:08:04.411 00:08:04.411 --- 10.0.0.1 ping statistics --- 00:08:04.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.411 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:04.411 08:05:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:08:04.411 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:04.411 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:08:04.411 00:08:04.411 --- 10.0.0.2 ping statistics --- 00:08:04.411 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:04.411 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:04.411 08:05:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 
10.0.0.1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:08:04.411 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:08:04.412 ' 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t 
tcp' 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:04.412 08:05:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=1523109 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 1523109 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1523109 ']' 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:04.412 08:05:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:04.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:04.412 [2024-11-20 08:05:18.062730] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:08:04.412 [2024-11-20 08:05:18.062776] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:04.412 [2024-11-20 08:05:18.141624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:04.412 [2024-11-20 08:05:18.182640] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:04.412 [2024-11-20 08:05:18.182679] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:04.412 [2024-11-20 08:05:18.182686] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:04.412 [2024-11-20 08:05:18.182691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:04.412 [2024-11-20 08:05:18.182696] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:04.412 [2024-11-20 08:05:18.184143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:04.412 [2024-11-20 08:05:18.184249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:04.412 [2024-11-20 08:05:18.184250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:08:04.412 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:04.671 [2024-11-20 08:05:18.492323] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:04.671 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:04.930 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:04.930 [2024-11-20 08:05:18.893724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.930 08:05:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.188 08:05:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:08:05.447 Malloc0 00:08:05.448 08:05:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:05.706 Delay0 00:08:05.706 08:05:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:05.706 08:05:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:08:05.965 NULL1 00:08:05.965 08:05:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:08:06.224 08:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1523591 00:08:06.224 08:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:08:06.224 08:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:06.224 08:05:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:07.603 Read completed with error (sct=0, sc=11) 00:08:07.603 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:07.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.603 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:07.604 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:08:07.604 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:08:07.862 true 00:08:07.862 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:07.862 08:05:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:08.800 08:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:08.800 08:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:08:08.800 08:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:08:09.059 true 00:08:09.059 08:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:09.059 08:05:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:09.318 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:09.577 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:08:09.577 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:08:09.577 true 00:08:09.577 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:09.577 08:05:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:10.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.955 08:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:10.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.955 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:10.955 08:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:08:10.955 08:05:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:08:11.214 true 00:08:11.214 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:11.214 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:11.214 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:11.473 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:08:11.473 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:08:11.731 true 00:08:11.731 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:11.731 08:05:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:13.109 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:08:13.109 08:05:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:13.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.109 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:13.109 08:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:08:13.109 08:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:08:13.368 true 00:08:13.368 08:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:13.368 08:05:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.320 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.320 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:08:14.320 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:08:14.578 true 00:08:14.578 08:05:28 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:14.578 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:14.837 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:14.837 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:08:14.837 08:05:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:08:15.096 true 00:08:15.096 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:15.096 08:05:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:16.032 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.291 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:16.291 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.291 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.291 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.291 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:16.291 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:08:16.291 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:08:16.291 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:08:16.549 true 00:08:16.549 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:16.549 08:05:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:17.485 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:17.485 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:17.485 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:08:17.744 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:08:17.744 true 00:08:17.744 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:17.744 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:18.003 08:05:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:18.262 08:05:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:08:18.262 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:08:18.262 true 00:08:18.522 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:18.522 08:05:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:19.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.458 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:19.458 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.717 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:19.717 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:08:19.717 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:08:19.976 true 00:08:19.976 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:19.976 08:05:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:20.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.913 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:20.913 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:20.913 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:08:20.913 08:05:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:08:21.172 true 00:08:21.172 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:21.172 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:21.433 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:21.729 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:08:21.729 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:08:21.729 true 00:08:21.729 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:21.729 08:05:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.169 08:05:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:23.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.169 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:23.169 08:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:08:23.169 08:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:08:23.428 true 00:08:23.428 08:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:23.428 08:05:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:23.994 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.252 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.252 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:24.252 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:08:24.252 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:08:24.511 true 00:08:24.511 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:24.511 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:24.770 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:24.770 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:08:24.770 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:08:25.029 true 00:08:25.029 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:25.029 08:05:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:26.406 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:26.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.406 Message suppressed 999 
times: Read completed with error (sct=0, sc=11) 00:08:26.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.406 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:26.406 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:08:26.406 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:08:26.665 true 00:08:26.665 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:26.665 08:05:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:27.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.601 08:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:27.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:27.601 08:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:08:27.601 08:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:08:27.859 true 00:08:27.859 08:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:27.859 08:05:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:08:28.117 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:28.376 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:08:28.376 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:08:28.376 true 00:08:28.635 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:28.635 08:05:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:29.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.570 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:29.570 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:29.829 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:08:29.829 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1021 00:08:30.087 true 00:08:30.087 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:30.087 08:05:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.024 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.024 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:08:31.024 08:05:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:08:31.282 true 00:08:31.282 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:31.282 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:31.540 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:31.798 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:08:31.798 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:08:31.798 true 00:08:31.798 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:31.798 08:05:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.176 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.176 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:33.176 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:08:33.176 08:05:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:08:33.176 true 00:08:33.435 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:33.435 08:05:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.001 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:34.001 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:08:34.260 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:08:34.260 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:08:34.519 true 00:08:34.519 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:34.519 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:34.778 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:35.036 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:08:35.036 08:05:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:08:35.036 true 00:08:35.036 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:35.037 08:05:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:36.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.413 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:36.413 
Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.413 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:08:36.413 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:08:36.413 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:08:36.671 true 00:08:36.671 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:36.671 08:05:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:37.607 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:37.607 Initializing NVMe Controllers 00:08:37.607 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:37.607 Controller IO queue size 128, less than required. 00:08:37.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:08:37.607 Controller IO queue size 128, less than required. 00:08:37.607 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:37.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:37.607 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:08:37.607 Initialization complete. Launching workers. 00:08:37.607 ======================================================== 00:08:37.607 Latency(us) 00:08:37.607 Device Information : IOPS MiB/s Average min max 00:08:37.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2211.89 1.08 42180.81 2997.77 1012130.47 00:08:37.607 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18266.09 8.92 7007.33 1564.72 371618.44 00:08:37.607 ======================================================== 00:08:37.607 Total : 20477.98 10.00 10806.52 1564.72 1012130.47 00:08:37.607 00:08:37.607 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:08:37.607 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:08:37.866 true 00:08:37.866 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1523591 00:08:37.866 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1523591) - No such process 00:08:37.866 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1523591 00:08:37.866 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.126 08:05:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
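The Total row in the summary table above is consistent with an IOPS-weighted mean of the two per-namespace average latencies (NSID 1 is far slower because it sits behind the `Delay0` bdev). A quick sanity check, with the four values copied from the table:

```shell
#!/bin/bash
# Check that Total average latency = IOPS-weighted mean of the two
# per-namespace averages reported in the run summary.
awk 'BEGIN {
    iops1 = 2211.89;  avg1 = 42180.81   # NSID 1 (behind Delay0)
    iops2 = 18266.09; avg2 = 7007.33    # NSID 2
    total_iops = iops1 + iops2          # table reports 20477.98
    weighted = (iops1 * avg1 + iops2 * avg2) / total_iops
    printf "total IOPS %.2f, weighted avg %.2f us\n", total_iops, weighted
}'
```

This reproduces the table's totals to rounding error (20477.98 IOPS, roughly 10806.5 us), confirming the Total row is derived rather than independently measured.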
nqn.2016-06.io.spdk:cnode1 2 00:08:38.384 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:08:38.384 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:08:38.384 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:08:38.384 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:38.384 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:08:38.384 null0 00:08:38.384 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:38.384 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:38.384 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:08:38.642 null1 00:08:38.642 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:38.642 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:38.642 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:08:38.901 null2 00:08:38.901 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:38.901 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:38.901 08:05:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:08:39.160 null3 00:08:39.160 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:39.160 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:39.160 08:05:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:08:39.160 null4 00:08:39.160 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:39.160 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:39.160 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:08:39.419 null5 00:08:39.419 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:39.419 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:39.419 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:08:39.678 null6 00:08:39.678 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:39.678 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:39.678 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:08:39.937 null7 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:39.937 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1529213 1529214 1529216 1529218 1529220 1529222 1529223 1529225
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:39.938 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:40.197 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:40.198 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:40.198 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:40.198 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:40.198 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:40.198 08:05:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.198 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:40.456 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:40.456 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:40.456 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:40.456 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:40.456 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:40.456 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:40.456 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:40.456 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.715 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:40.716 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.716 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.716 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:40.716 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.716 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.716 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:40.716 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:40.716 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:40.716 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:40.974 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:40.974 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:40.974 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:40.974 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:40.974 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:40.974 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:40.974 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:40.974 08:05:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.234 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:41.235 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:41.494 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:41.495 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:41.495 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:41.753 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:41.753 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:41.753 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:41.753 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:41.754 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:41.754 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:41.754 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:41.754 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.013 08:05:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:08:42.272 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:08:42.272 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:08:42.272 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:08:42.272 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:08:42.273 08:05:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.273 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:42.531 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:42.531 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:42.531 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:42.531 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:42.531 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:42.531 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:42.531 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:42.531 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.789 08:05:56 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:42.789 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:43.048 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:43.048 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:08:43.048 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:43.048 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.048 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:43.048 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:43.048 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:43.048 08:05:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:43.048 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.048 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.048 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:43.308 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:08:43.567 08:05:57 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:43.567 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:08:43.826 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:43.826 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:08:43.826 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:08:43.826 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:08:43.826 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:08:43.826 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:08:43.826 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:08:43.826 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:44.085 rmmod nvme_tcp 00:08:44.085 rmmod nvme_fabrics 00:08:44.085 rmmod nvme_keyring 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:44.085 08:05:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@106 -- # set -e 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 1523109 ']' 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 1523109 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1523109 ']' 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 1523109 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1523109 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1523109' 00:08:44.085 killing process with pid 1523109 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1523109 00:08:44.085 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1523109 00:08:44.344 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:44.344 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:08:44.344 08:05:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@254 -- # local dev 00:08:44.344 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:08:44.344 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:44.344 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:44.344 08:05:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:46.879 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:08:46.879 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:46.879 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # return 0 00:08:46.879 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:08:46.880 08:06:00 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}"
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 ))
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns=
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1'
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=()
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@274 -- # iptr
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-save
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-restore
00:08:46.880
00:08:46.880 real	0m48.681s
00:08:46.880 user	3m16.590s
00:08:46.880 sys	0m15.884s
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x
00:08:46.880 ************************************
00:08:46.880 END TEST nvmf_ns_hotplug_stress
00:08:46.880 ************************************
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:08:46.880 ************************************
00:08:46.880 START TEST nvmf_delete_subsystem
00:08:46.880 ************************************
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp
00:08:46.880 * Looking for test storage...
00:08:46.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:08:46.880 08:06:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:46.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.880 --rc genhtml_branch_coverage=1 00:08:46.880 --rc genhtml_function_coverage=1 00:08:46.880 --rc genhtml_legend=1 00:08:46.880 --rc geninfo_all_blocks=1 00:08:46.880 --rc geninfo_unexecuted_blocks=1 00:08:46.880 00:08:46.880 ' 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:46.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.880 --rc genhtml_branch_coverage=1 00:08:46.880 --rc genhtml_function_coverage=1 00:08:46.880 --rc genhtml_legend=1 00:08:46.880 --rc geninfo_all_blocks=1 00:08:46.880 --rc geninfo_unexecuted_blocks=1 00:08:46.880 00:08:46.880 ' 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:46.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.880 --rc genhtml_branch_coverage=1 00:08:46.880 --rc genhtml_function_coverage=1 00:08:46.880 --rc genhtml_legend=1 00:08:46.880 --rc geninfo_all_blocks=1 00:08:46.880 --rc geninfo_unexecuted_blocks=1 00:08:46.880 00:08:46.880 ' 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:46.880 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:46.880 --rc genhtml_branch_coverage=1 00:08:46.880 --rc genhtml_function_coverage=1 00:08:46.880 --rc genhtml_legend=1 00:08:46.880 --rc geninfo_all_blocks=1 00:08:46.880 --rc geninfo_unexecuted_blocks=1 00:08:46.880 00:08:46.880 ' 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:46.880 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:46.881 08:06:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:46.881 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:08:46.881 08:06:00 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:08:46.881 08:06:00 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.458 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:53.458 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:08:53.458 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:08:53.458 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:08:53.458 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:08:53.458 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:08:53.458 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@136 -- # e810=() 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:53.459 
08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:53.459 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in 
"${pci_devs[@]}" 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:53.459 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:53.459 Found net devices under 0000:86:00.0: cvl_0_0 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:53.459 Found net devices under 0000:86:00.1: cvl_0_1 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@247 -- # create_target_ns 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:08:53.459 08:06:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:08:53.459 08:06:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:08:53.459 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:08:53.460 08:06:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:08:53.460 10.0.0.1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias 00:08:53.460 10.0.0.2 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:08:53.460 08:06:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:08:53.460 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:53.460 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:08:53.460 00:08:53.460 --- 10.0.0.1 ping statistics --- 00:08:53.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.460 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:53.460 08:06:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:08:53.460 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:53.460 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:08:53.460 00:08:53.460 --- 10.0.0.2 ping statistics --- 00:08:53.460 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:53.460 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:53.460 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:08:53.461 08:06:06 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 
NVMF_TARGET_NS_CMD 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 
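Throughout the trace, `set_ip` records each interface's address in `/sys/class/net/<dev>/ifalias` with `tee`, and `get_ip_address` later reads it back with `cat` (inside the `nvmf_ns_spdk` namespace when needed). The round trip can be mimicked without root or real NICs by substituting a temp directory for `/sys` (the directory layout here is an assumption for illustration):

```shell
# Hypothetical re-creation of the ifalias bookkeeping from the log, using a
# temp dir in place of /sys/class/net so it runs without privileges.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/cvl_0_0"

# What 'echo 10.0.0.1 | tee .../ifalias' did at setup.sh@200:
echo 10.0.0.1 > "$sysfs/cvl_0_0/ifalias"

# What 'cat .../ifalias' does at setup.sh@163 to recover the IP later:
ip_addr=$(cat "$sysfs/cvl_0_0/ifalias")
echo "$ip_addr"
```

Storing the IP in `ifalias` lets later helpers (`get_initiator_ip_address`, `get_target_ip_address`) resolve addresses purely from sysfs, without parsing `ip addr` output.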
00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:08:53.461 ' 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=1534150 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 1534150 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1534150 ']' 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:53.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:53.461 08:06:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.461 [2024-11-20 08:06:06.803027] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:08:53.461 [2024-11-20 08:06:06.803072] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:53.461 [2024-11-20 08:06:06.882174] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:08:53.461 [2024-11-20 08:06:06.925657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:53.461 [2024-11-20 08:06:06.925692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:53.461 [2024-11-20 08:06:06.925700] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:53.461 [2024-11-20 08:06:06.925707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:53.461 [2024-11-20 08:06:06.925713] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:53.461 [2024-11-20 08:06:06.926987] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:53.461 [2024-11-20 08:06:06.926990] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:53.461 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:53.461 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:08:53.461 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:53.461 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:53.461 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.461 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:53.461 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.462 [2024-11-20 08:06:07.070867] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.462 [2024-11-20 08:06:07.091069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.462 NULL1 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.462 Delay0 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.462 08:06:07 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1534382 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:53.462 08:06:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:08:53.462 [2024-11-20 08:06:07.202004] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:08:55.372 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:55.372 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:55.372 08:06:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Write completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 starting I/O failed: -6 00:08:55.372 Write completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 starting I/O failed: -6 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 starting I/O failed: -6 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Write completed with error (sct=0, sc=8) 00:08:55.372 starting I/O failed: -6 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Write completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 starting I/O failed: -6 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Write completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 starting I/O failed: -6 00:08:55.372 Read completed with error (sct=0, sc=8) 00:08:55.372 Write completed with error 
(sct=0, sc=8)
00:08:55.372 Read completed with error (sct=0, sc=8)
00:08:55.372 Write completed with error (sct=0, sc=8)
00:08:55.372 starting I/O failed: -6
[... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" lines (00:08:55.372-00:08:55.373) elided ...]
00:08:55.374 [2024-11-20 08:06:09.241940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7bdc000c40 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" lines (00:08:55.374-00:08:55.375) elided ...]
Read completed with error (sct=0, sc=8)
[... repeated "Read/Write completed with error (sct=0, sc=8)" lines (00:08:55.375) elided ...]
00:08:56.318 [2024-11-20 08:06:10.214005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98b9a0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" lines (00:08:56.318) elided ...]
00:08:56.318 [2024-11-20 08:06:10.241745] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98a860 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" lines (00:08:56.318-00:08:56.319) elided ...]
00:08:56.319 [2024-11-20 08:06:10.241929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x98a4a0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" lines (00:08:56.319) elided ...]
00:08:56.319 [2024-11-20 08:06:10.244531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7bdc00d7e0 is same with the state(6) to be set
[... repeated "Read/Write completed with error (sct=0, sc=8)" lines (00:08:56.319) elided ...]
00:08:56.319 [2024-11-20 08:06:10.245417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7bdc00d020 is same with the state(6) to be set
00:08:56.319 Initializing NVMe Controllers
00:08:56.319 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:08:56.319 Controller IO queue size 128, less than required.
00:08:56.319 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
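The xtrace lines around here show delete_subsystem.sh polling the perf process with `kill -0`, a bounded delay counter, and `sleep 0.5` until the process disappears after the subsystem is deleted. A minimal sketch of that polling pattern, assuming a hypothetical helper name (`wait_for_pid_exit`) and the 30-iteration cap seen in the trace; this is not the actual test script:

```shell
# Sketch of the liveness-poll loop visible in the xtrace output below.
# "kill -0" sends no signal; it only succeeds while the process exists.
wait_for_pid_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 30 )) && return 1   # give up after ~15 seconds (30 * 0.5s)
        sleep 0.5
    done
    return 0
}

# Usage: wait for a short-lived background job to finish.
sleep 1 &
wait_for_pid_exit "$!" && echo "process exited"   # prints "process exited"
```

Note that once the target process is gone, `kill -0` on its PID fails with "No such process", which is exactly the message the trace below records at delete_subsystem.sh line 35.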
00:08:56.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:08:56.319 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:08:56.319 Initialization complete. Launching workers.
00:08:56.319 ========================================================
00:08:56.319                                                 Latency(us)
00:08:56.319 Device Information                                            :    IOPS   MiB/s    Average        min        max
00:08:56.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  187.67    0.09  899552.86     331.12 1007043.16
00:08:56.319 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  153.32    0.07  936708.68     250.73 1010426.28
00:08:56.319 ========================================================
00:08:56.319 Total                                                         :  340.98    0.17  916259.42     250.73 1010426.28
00:08:56.319
00:08:56.319 [2024-11-20 08:06:10.245939] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x98b9a0 (9): Bad file descriptor
00:08:56.319 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:08:56.319 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:08:56.319 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:08:56.319 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1534382
00:08:56.319 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:08:56.885 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1534382
00:08:56.886 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1534382) - No such process
00:08:56.886 08:06:10
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1534382 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1534382 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1534382 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.886 [2024-11-20 08:06:10.773781] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1534854 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1534854 00:08:56.886 08:06:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:56.886 [2024-11-20 08:06:10.863714] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:08:57.452 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:57.452 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1534854 00:08:57.452 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:58.019 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:58.019 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1534854 00:08:58.019 08:06:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:58.586 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:58.586 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1534854 00:08:58.586 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:58.873 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:58.873 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1534854 00:08:58.873 08:06:12 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:08:59.440 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:08:59.440 08:06:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1534854 00:08:59.440 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.026 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:09:00.026 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1534854 00:09:00.026 08:06:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:09:00.026 Initializing NVMe Controllers 00:09:00.026 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:00.026 Controller IO queue size 128, less than required. 00:09:00.026 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:00.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:09:00.026 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:09:00.026 Initialization complete. Launching workers. 
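The spdk_nvme_perf summaries in this log report MiB/s alongside IOPS, and since this run uses 512-byte IOs (the `-o 512` flag passed above), the MiB/s column is simply IOPS scaled by the IO size. A small sketch of that conversion, using a made-up helper name:

```shell
# Hypothetical helper reproducing the IOPS -> MiB/s arithmetic in the
# perf summary tables: throughput = IOPS * IO size in bytes / 1 MiB.
iops_to_mibps() {
    awk -v iops="$1" -v io="$2" 'BEGIN { printf "%.2f\n", iops * io / 1048576 }'
}

iops_to_mibps 187.67 512   # prints 0.09, matching the earlier "from core 2" row
iops_to_mibps 340.98 512   # prints 0.17, matching the earlier "Total" row
```

The same arithmetic applies to the summary table that follows this point in the log.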
00:09:00.026 ========================================================
00:09:00.026                                                 Latency(us)
00:09:00.026 Device Information                                            :    IOPS   MiB/s    Average        min        max
00:09:00.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  128.00    0.06 1002067.57 1000159.98 1006640.16
00:09:00.026 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  128.00    0.06 1003835.25 1000293.29 1041493.40
00:09:00.026 ========================================================
00:09:00.026 Total                                                         :  256.00    0.12 1002951.41 1000159.98 1041493.40
00:09:00.026
00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1534854
00:09:00.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1534854) - No such process
00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1534854
00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup
00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync
00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e
00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20}
00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r
nvme-tcp 00:09:00.594 rmmod nvme_tcp 00:09:00.594 rmmod nvme_fabrics 00:09:00.594 rmmod nvme_keyring 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 1534150 ']' 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 1534150 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1534150 ']' 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1534150 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1534150 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1534150' 00:09:00.594 killing process with pid 1534150 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1534150 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 
1534150 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@254 -- # local dev 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:00.594 08:06:14 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # return 0 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@274 -- # iptr 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-save 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-restore 00:09:03.127 00:09:03.127 real 0m16.299s 00:09:03.127 user 0m29.136s 00:09:03.127 sys 
0m5.559s 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:09:03.127 ************************************ 00:09:03.127 END TEST nvmf_delete_subsystem 00:09:03.127 ************************************ 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:03.127 ************************************ 00:09:03.127 START TEST nvmf_host_management 00:09:03.127 ************************************ 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:09:03.127 * Looking for test storage... 
00:09:03.127 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:09:03.127 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:09:03.128 08:06:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:03.128 08:06:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:03.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.128 --rc genhtml_branch_coverage=1 00:09:03.128 --rc genhtml_function_coverage=1 00:09:03.128 --rc genhtml_legend=1 00:09:03.128 --rc geninfo_all_blocks=1 00:09:03.128 --rc geninfo_unexecuted_blocks=1 00:09:03.128 00:09:03.128 ' 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:03.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.128 --rc genhtml_branch_coverage=1 00:09:03.128 --rc genhtml_function_coverage=1 00:09:03.128 --rc genhtml_legend=1 00:09:03.128 --rc geninfo_all_blocks=1 00:09:03.128 --rc geninfo_unexecuted_blocks=1 00:09:03.128 00:09:03.128 ' 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:03.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.128 --rc genhtml_branch_coverage=1 00:09:03.128 --rc genhtml_function_coverage=1 00:09:03.128 --rc genhtml_legend=1 00:09:03.128 --rc geninfo_all_blocks=1 00:09:03.128 --rc geninfo_unexecuted_blocks=1 00:09:03.128 00:09:03.128 ' 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:03.128 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:03.128 --rc genhtml_branch_coverage=1 00:09:03.128 --rc genhtml_function_coverage=1 00:09:03.128 --rc genhtml_legend=1 00:09:03.128 --rc geninfo_all_blocks=1 00:09:03.128 --rc geninfo_unexecuted_blocks=1 00:09:03.128 00:09:03.128 ' 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 
00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:03.128 08:06:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:03.128 08:06:16 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:03.128 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:03.128 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:03.129 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:03.129 
08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:03.129 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:09:03.129 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:03.129 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:03.129 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:03.129 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:03.129 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:03.129 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:09:03.129 08:06:16 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.699 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:09.699 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 
-- # net_devs=() 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:09.700 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.700 08:06:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:09.700 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:09.700 08:06:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:09.700 Found net devices under 0000:86:00.0: cvl_0_0 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:09.700 Found net devices under 0000:86:00.1: cvl_0_1 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@247 -- # create_target_ns 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo 
up 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:09:09.700 08:06:22 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:09:09.700 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 
10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:09:09.701 10.0.0.1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:09:09.701 10.0.0.2 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:09:09.701 
08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:09.701 
08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:09.701 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:09.701 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.352 ms 00:09:09.701 00:09:09.701 --- 10.0.0.1 ping statistics --- 00:09:09.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.701 rtt min/avg/max/mdev = 0.352/0.352/0.352/0.000 ms 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:09.701 08:06:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:09.701 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:09.701 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:09.701 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:09:09.701 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:09:09.701 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:09:09.701 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:09:09.701 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:09:09.701 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:09.701 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:09:09.701 00:09:09.701 --- 10.0.0.2 ping statistics --- 00:09:09.701 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:09.701 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:09:09.701 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:09:09.702 
08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:09.702 
08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 
in_ns=NVMF_TARGET_NS_CMD ip 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 
00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:09:09.702 ' 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == 
\t\c\p ]] 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=1539113 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 1539113 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1539113 ']' 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:09.702 08:06:23 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:09.702 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:09.702 08:06:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:09.702 [2024-11-20 08:06:23.168217] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:09:09.702 [2024-11-20 08:06:23.168265] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:09.702 [2024-11-20 08:06:23.248645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:09.702 [2024-11-20 08:06:23.288982] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:09.702 [2024-11-20 08:06:23.289020] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:09.702 [2024-11-20 08:06:23.289026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:09.702 [2024-11-20 08:06:23.289032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:09.702 [2024-11-20 08:06:23.289037] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:09.703 [2024-11-20 08:06:23.290538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:09.703 [2024-11-20 08:06:23.290649] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:09.703 [2024-11-20 08:06:23.290755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:09.703 [2024-11-20 08:06:23.290756] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.272 [2024-11-20 08:06:24.044890] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:09:10.272 08:06:24 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.272 Malloc0 00:09:10.272 [2024-11-20 08:06:24.124061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1539382 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1539382 /var/tmp/bdevperf.sock 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1539382 ']' 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:10.272 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:10.272 { 00:09:10.272 "params": { 00:09:10.272 "name": "Nvme$subsystem", 00:09:10.272 "trtype": "$TEST_TRANSPORT", 00:09:10.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:10.272 "adrfam": "ipv4", 00:09:10.272 "trsvcid": "$NVMF_PORT", 00:09:10.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:10.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:10.272 "hdgst": ${hdgst:-false}, 
00:09:10.272 "ddgst": ${ddgst:-false} 00:09:10.272 }, 00:09:10.272 "method": "bdev_nvme_attach_controller" 00:09:10.272 } 00:09:10.272 EOF 00:09:10.272 )") 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:09:10.272 08:06:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:10.272 "params": { 00:09:10.272 "name": "Nvme0", 00:09:10.272 "trtype": "tcp", 00:09:10.272 "traddr": "10.0.0.2", 00:09:10.272 "adrfam": "ipv4", 00:09:10.272 "trsvcid": "4420", 00:09:10.272 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:10.272 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:10.272 "hdgst": false, 00:09:10.272 "ddgst": false 00:09:10.272 }, 00:09:10.272 "method": "bdev_nvme_attach_controller" 00:09:10.272 }' 00:09:10.272 [2024-11-20 08:06:24.221807] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:09:10.272 [2024-11-20 08:06:24.221854] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539382 ] 00:09:10.532 [2024-11-20 08:06:24.299450] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:10.532 [2024-11-20 08:06:24.340323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:10.532 Running I/O for 10 seconds... 
00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:09:11.099 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.360 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1155 00:09:11.360 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1155 -ge 100 ']' 00:09:11.360 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:09:11.360 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:09:11.360 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:09:11.360 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:11.360 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.360 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:11.360 [2024-11-20 08:06:25.144457] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:09:11.360 [2024-11-20 08:06:25.144496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.360 [2024-11-20 08:06:25.144506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:09:11.360 [2024-11-20 08:06:25.144514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:09:11.360 [2024-11-20 08:06:25.144547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13e5500 is same with the state(6) to be set 00:09:11.360 [... ~125 repeated nvme_qpair.c notices elided: ASYNC EVENT REQUESTs (qid:0 cid:1-3), one WRITE (sqid:1 cid:63 lba:32640 len:128) and READs (sqid:1 cid:0-62, lba 24576-32512 len:128), each completed as ABORTED - SQ DELETION (00/08) while the submission queue was torn down ...] 00:09:11.362 [2024-11-20 08:06:25.145554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15fe810 is same with the state(6) to be set 00:09:11.362 [2024-11-20 08:06:25.146513] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:09:11.362 task offset: 32640 on job bdev=Nvme0n1 fails 00:09:11.362 00:09:11.362 Latency(us) 00:09:11.362 [2024-11-20T07:06:25.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:11.362 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:11.362 Job: Nvme0n1 ended in about 0.60 seconds with error 00:09:11.362 Verification LBA range: start 0x0 length 0x400 00:09:11.362 Nvme0n1 : 0.60 2016.21 126.01 106.12 0.00 29537.04 4556.31 26588.89 00:09:11.362 [2024-11-20T07:06:25.390Z] =================================================================================================================== 00:09:11.362 [2024-11-20T07:06:25.390Z] Total : 2016.21 126.01 106.12 0.00 29537.04 4556.31 26588.89 00:09:11.362 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.362 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:09:11.362 [2024-11-20 08:06:25.148871] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:09:11.362 [2024-11-20 08:06:25.148892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e5500 (9): Bad file descriptor 00:09:11.362 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:11.362 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:11.362 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:11.362 08:06:25 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:09:11.362 [2024-11-20 08:06:25.200185] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1539382 00:09:12.297 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1539382) - No such process 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:12.297 { 00:09:12.297 "params": { 00:09:12.297 "name": "Nvme$subsystem", 00:09:12.297 "trtype": "$TEST_TRANSPORT", 00:09:12.297 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:12.297 "adrfam": "ipv4", 00:09:12.297 "trsvcid": "$NVMF_PORT", 00:09:12.297 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:12.297 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:12.297 "hdgst": ${hdgst:-false}, 00:09:12.297 "ddgst": ${ddgst:-false} 00:09:12.297 }, 00:09:12.297 "method": 
"bdev_nvme_attach_controller" 00:09:12.297 } 00:09:12.297 EOF 00:09:12.297 )") 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:09:12.297 08:06:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:12.297 "params": { 00:09:12.297 "name": "Nvme0", 00:09:12.297 "trtype": "tcp", 00:09:12.297 "traddr": "10.0.0.2", 00:09:12.297 "adrfam": "ipv4", 00:09:12.297 "trsvcid": "4420", 00:09:12.297 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:12.297 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:09:12.297 "hdgst": false, 00:09:12.297 "ddgst": false 00:09:12.297 }, 00:09:12.297 "method": "bdev_nvme_attach_controller" 00:09:12.297 }' 00:09:12.297 [2024-11-20 08:06:26.211364] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:09:12.297 [2024-11-20 08:06:26.211412] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1539640 ] 00:09:12.297 [2024-11-20 08:06:26.286879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.555 [2024-11-20 08:06:26.325353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.555 Running I/O for 1 seconds... 
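[editor's note] The bdevperf run above receives its controller configuration on /dev/fd/62, assembled by the script's gen_nvmf_target_json helper from a per-subsystem heredoc. A minimal standalone sketch of that pattern, with subsystem 0's values hard-coded (the real helper templates $subsystem, transport, and address variables and normalizes the result with jq):

```shell
#!/bin/sh
# Build the bdev_nvme_attach_controller params blob the same way the
# heredoc in the log does, with subsystem 0's values filled in directly.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode0",
    "hostnqn": "nqn.2016-06.io.spdk:host0",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
# Sanity-check the blob before it would be handed to bdevperf --json.
echo "$config" | grep -q '"method": "bdev_nvme_attach_controller"' && echo OK
```

The check prints OK when the method line is present.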
00:09:13.930 1995.00 IOPS, 124.69 MiB/s 00:09:13.930 Latency(us) 00:09:13.930 [2024-11-20T07:06:27.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:13.930 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:09:13.930 Verification LBA range: start 0x0 length 0x400 00:09:13.930 Nvme0n1 : 1.01 2045.59 127.85 0.00 0.00 30704.49 1677.41 26464.06 00:09:13.931 [2024-11-20T07:06:27.959Z] =================================================================================================================== 00:09:13.931 [2024-11-20T07:06:27.959Z] Total : 2045.59 127.85 0.00 0.00 30704.49 1677.41 26464.06 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:13.931 08:06:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:13.931 rmmod nvme_tcp 00:09:13.931 rmmod nvme_fabrics 00:09:13.931 rmmod nvme_keyring 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 1539113 ']' 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 1539113 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1539113 ']' 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1539113 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1539113 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1539113' 00:09:13.931 killing process with pid 1539113 00:09:13.931 08:06:27 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1539113 00:09:13.931 08:06:27 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1539113 00:09:14.190 [2024-11-20 08:06:27.982130] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:09:14.190 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:14.190 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:09:14.190 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:09:14.190 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:09:14.190 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:14.190 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:14.190 08:06:28 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@211 -- # local 
dev=cvl_0_0 in_ns= 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:09:16.094 08:06:30 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:09:16.094 00:09:16.094 real 0m13.329s 00:09:16.094 user 0m23.231s 00:09:16.094 sys 0m5.731s 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:16.094 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:09:16.094 ************************************ 00:09:16.094 END TEST nvmf_host_management 00:09:16.094 ************************************ 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:16.356 ************************************ 00:09:16.356 START TEST nvmf_lvol 00:09:16.356 ************************************ 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:09:16.356 * Looking for test storage... 
00:09:16.356 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:16.356 08:06:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:16.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.356 --rc genhtml_branch_coverage=1 00:09:16.356 --rc genhtml_function_coverage=1 00:09:16.356 --rc genhtml_legend=1 00:09:16.356 --rc geninfo_all_blocks=1 00:09:16.356 --rc geninfo_unexecuted_blocks=1 
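The trace above steps through SPDK's `cmp_versions`/`lt` helper comparing `1.15` against `2` field by field. A minimal re-creation of that per-component comparison is sketched below; the function name `version_lt` is illustrative, not the actual helper name in `scripts/common.sh`:

```shell
#!/usr/bin/env bash
# version_lt A B: succeed (return 0) iff version A sorts strictly before B.
# Mirrors the approach traced above: split both strings on '.', then compare
# numerically component by component, treating missing components as 0.
version_lt() {
  local IFS=.
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i max=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < max; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0   # earlier component already decides: A < B
    (( x > y )) && return 1   # earlier component already decides: A > B
  done
  return 1                    # all components equal: not strictly less
}
```

With this sketch, `version_lt 1.15 2` succeeds (1 < 2 on the first component), matching the `ver1[v]=1` / `ver2[v]=2` comparison visible in the trace.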
00:09:16.356 00:09:16.356 ' 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:16.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.356 --rc genhtml_branch_coverage=1 00:09:16.356 --rc genhtml_function_coverage=1 00:09:16.356 --rc genhtml_legend=1 00:09:16.356 --rc geninfo_all_blocks=1 00:09:16.356 --rc geninfo_unexecuted_blocks=1 00:09:16.356 00:09:16.356 ' 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:16.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.356 --rc genhtml_branch_coverage=1 00:09:16.356 --rc genhtml_function_coverage=1 00:09:16.356 --rc genhtml_legend=1 00:09:16.356 --rc geninfo_all_blocks=1 00:09:16.356 --rc geninfo_unexecuted_blocks=1 00:09:16.356 00:09:16.356 ' 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:16.356 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:16.356 --rc genhtml_branch_coverage=1 00:09:16.356 --rc genhtml_function_coverage=1 00:09:16.356 --rc genhtml_legend=1 00:09:16.356 --rc geninfo_all_blocks=1 00:09:16.356 --rc geninfo_unexecuted_blocks=1 00:09:16.356 00:09:16.356 ' 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:16.356 08:06:30 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:16.356 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:16.357 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 
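The `[: : integer expression expected` message logged above comes from `'[' '' -eq 1 ']'`: `-eq` requires an integer operand, and the variable expanded to an empty string. A hedged sketch of a guard for that pattern follows; `safe_eq_one` is an illustrative name, not code from `nvmf/common.sh`:

```shell
#!/usr/bin/env bash
# safe_eq_one VAL: test "VAL equals 1" without tripping the
# "integer expression expected" error when VAL is empty or unset.
safe_eq_one() {
  local val=${1:-0}   # default an empty/missing value to 0 so -eq
                      # always receives a valid integer operand
  [ "$val" -eq 1 ]
}
```

Usage: `safe_eq_one "$NVMF_TARGET_INTERACTIVE"` returns false cleanly instead of printing an error when the flag was never set.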
-- # _remove_target_ns 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:09:16.357 08:06:30 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:09:23.019 08:06:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:23.019 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:23.019 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- 
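The device discovery traced above buckets PCI device IDs into NIC families: Intel `0x1592`/`0x159b` land in the `e810` array (the `ice` driver devices found at `0000:86:00.0/.1`), `0x37d2` in `x722`, and the various Mellanox IDs in `mlx`. A compact sketch of that classification, under the assumption that the ID lists in `nvmf/common.sh@141`-`160` are complete as shown (the function name `pci_family` is hypothetical):

```shell
#!/usr/bin/env bash
# pci_family DEVICE_ID: map a PCI device ID to the NIC family bucket
# used by the nvmf test setup (e810 / x722 / mlx), per the trace above.
pci_family() {
  case "$1" in
    0x1592|0x159b)  echo e810 ;;    # Intel E810 family (ice driver)
    0x37d2)         echo x722 ;;    # Intel X722
    0xa2dc|0x1021|0xa2d6|0x101d|0x101b|0x1017|0x1019|0x1015|0x1013)
                    echo mlx ;;     # Mellanox ConnectX family
    *)              echo unknown ;;
  esac
}
```

For the hardware in this run, `pci_family 0x159b` yields `e810`, which is why the `[[ e810 == e810 ]]` branch is taken and `pci_devs` is narrowed to the two E810 ports.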
# [[ e810 == e810 ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:23.019 Found net devices under 0000:86:00.0: cvl_0_0 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:23.019 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:23.020 Found net devices under 0000:86:00.1: cvl_0_1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@247 -- # create_target_ns 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD 
]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:09:23.020 08:06:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # 
eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:09:23.020 10.0.0.1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:09:23.020 10.0.0.2 00:09:23.020 08:06:36 
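The `val_to_ip` calls traced above turn the pooled integers `167772161` and `167772162` (0x0A000001, 0x0A000002) into `10.0.0.1` and `10.0.0.2`. The setup script does this with a `printf '%u.%u.%u.%u\n'` over the four bytes; a self-contained re-creation of that helper, assuming the same big-endian byte order shown in the trace:

```shell
#!/usr/bin/env bash
# val_to_ip VAL: convert a 32-bit integer into dotted-quad IPv4 notation,
# most significant byte first, as nvmf/setup.sh@11-13 does above.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >>  8) & 255 )) \
    $((  val        & 255 ))
}
```

This is why consecutive pool values map to adjacent host addresses: the initiator gets `10.0.0.1` on `cvl_0_0` and the target namespace gets `10.0.0.2` on `cvl_0_1`.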
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 
-- # dev_map["initiator$id"]=cvl_0_0 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:23.020 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:23.021 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:23.021 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.484 ms 00:09:23.021 00:09:23.021 --- 10.0.0.1 ping statistics --- 00:09:23.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.021 rtt min/avg/max/mdev = 0.484/0.484/0.484/0.000 ms 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:23.021 08:06:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:09:23.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:23.021 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:09:23.021 00:09:23.021 --- 10.0.0.2 ping statistics --- 00:09:23.021 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:23.021 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 
00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 
-- # get_net_dev initiator1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:09:23.021 08:06:36 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 
-- # [[ -n '' ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:09:23.021 ' 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:23.021 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=1543654 00:09:23.022 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 
0xFFFF -m 0x7 00:09:23.022 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 1543654 00:09:23.022 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1543654 ']' 00:09:23.022 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:23.022 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:23.022 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:23.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:23.022 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:23.022 08:06:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:23.022 [2024-11-20 08:06:36.573099] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:09:23.022 [2024-11-20 08:06:36.573150] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:23.022 [2024-11-20 08:06:36.653253] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:23.022 [2024-11-20 08:06:36.693280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:23.022 [2024-11-20 08:06:36.693319] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:23.022 [2024-11-20 08:06:36.693327] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:23.022 [2024-11-20 08:06:36.693333] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:23.022 [2024-11-20 08:06:36.693338] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:23.022 [2024-11-20 08:06:36.694722] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:23.022 [2024-11-20 08:06:36.694830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:23.022 [2024-11-20 08:06:36.694831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:23.588 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:23.588 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:09:23.588 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:23.588 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:23.588 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:23.588 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:23.588 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:23.847 [2024-11-20 08:06:37.616699] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:23.847 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.106 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # 
base_bdevs='Malloc0 ' 00:09:24.106 08:06:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:24.106 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:09:24.106 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:09:24.364 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:09:24.622 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=dc17a6b6-e39e-4967-99a3-caefb6ff1a0b 00:09:24.622 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u dc17a6b6-e39e-4967-99a3-caefb6ff1a0b lvol 20 00:09:24.879 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=920669d6-e387-4972-b4e9-d3c5a9d686cb 00:09:24.879 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:25.137 08:06:38 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 920669d6-e387-4972-b4e9-d3c5a9d686cb 00:09:25.137 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:25.395 [2024-11-20 08:06:39.291611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:09:25.395 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:25.652 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1544153 00:09:25.652 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:09:25.652 08:06:39 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:09:26.588 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 920669d6-e387-4972-b4e9-d3c5a9d686cb MY_SNAPSHOT 00:09:26.846 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=826901a2-9175-4d56-b6fd-feae4090baa4 00:09:26.846 08:06:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 920669d6-e387-4972-b4e9-d3c5a9d686cb 30 00:09:27.104 08:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 826901a2-9175-4d56-b6fd-feae4090baa4 MY_CLONE 00:09:27.362 08:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=09e26fc0-e75e-4467-bfcf-9be92efcb6d9 00:09:27.362 08:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 09e26fc0-e75e-4467-bfcf-9be92efcb6d9 00:09:27.930 08:06:41 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1544153 00:09:36.052 Initializing NVMe Controllers 00:09:36.052 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:09:36.052 Controller IO queue size 128, less than required. 00:09:36.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:09:36.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:09:36.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:09:36.052 Initialization complete. Launching workers. 00:09:36.052 ======================================================== 00:09:36.052 Latency(us) 00:09:36.052 Device Information : IOPS MiB/s Average min max 00:09:36.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12022.80 46.96 10646.14 1551.58 66493.73 00:09:36.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12379.80 48.36 10340.12 3458.13 55416.72 00:09:36.052 ======================================================== 00:09:36.053 Total : 24402.60 95.32 10490.89 1551.58 66493.73 00:09:36.053 00:09:36.053 08:06:49 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:36.312 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 920669d6-e387-4972-b4e9-d3c5a9d686cb 00:09:36.312 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u dc17a6b6-e39e-4967-99a3-caefb6ff1a0b 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
target/nvmf_lvol.sh@64 -- # nvmftestfini 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:36.571 rmmod nvme_tcp 00:09:36.571 rmmod nvme_fabrics 00:09:36.571 rmmod nvme_keyring 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 1543654 ']' 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 1543654 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1543654 ']' 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1543654 00:09:36.571 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1543654 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1543654' 00:09:36.830 killing process with pid 1543654 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1543654 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1543654 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:36.830 08:06:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:09:39.363 08:06:52 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@274 -- # iptr 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:09:39.363 00:09:39.363 real 
0m22.767s 00:09:39.363 user 1m5.370s 00:09:39.363 sys 0m7.724s 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:09:39.363 ************************************ 00:09:39.363 END TEST nvmf_lvol 00:09:39.363 ************************************ 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.363 ************************************ 00:09:39.363 START TEST nvmf_lvs_grow 00:09:39.363 ************************************ 00:09:39.363 08:06:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:09:39.363 * Looking for test storage... 
00:09:39.363 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:09:39.363 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:39.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.364 --rc genhtml_branch_coverage=1 00:09:39.364 --rc 
genhtml_function_coverage=1 00:09:39.364 --rc genhtml_legend=1 00:09:39.364 --rc geninfo_all_blocks=1 00:09:39.364 --rc geninfo_unexecuted_blocks=1 00:09:39.364 00:09:39.364 ' 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:39.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.364 --rc genhtml_branch_coverage=1 00:09:39.364 --rc genhtml_function_coverage=1 00:09:39.364 --rc genhtml_legend=1 00:09:39.364 --rc geninfo_all_blocks=1 00:09:39.364 --rc geninfo_unexecuted_blocks=1 00:09:39.364 00:09:39.364 ' 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:39.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.364 --rc genhtml_branch_coverage=1 00:09:39.364 --rc genhtml_function_coverage=1 00:09:39.364 --rc genhtml_legend=1 00:09:39.364 --rc geninfo_all_blocks=1 00:09:39.364 --rc geninfo_unexecuted_blocks=1 00:09:39.364 00:09:39.364 ' 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:39.364 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.364 --rc genhtml_branch_coverage=1 00:09:39.364 --rc genhtml_function_coverage=1 00:09:39.364 --rc genhtml_legend=1 00:09:39.364 --rc geninfo_all_blocks=1 00:09:39.364 --rc geninfo_unexecuted_blocks=1 00:09:39.364 00:09:39.364 ' 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.364 08:06:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 
00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:39.364 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:39.364 08:06:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:09:39.364 08:06:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:09:45.935 
08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # x722=() 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.935 08:06:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:45.935 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:45.935 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:45.935 Found net devices under 0000:86:00.0: cvl_0_0 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.935 08:06:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:45.935 Found net devices under 0000:86:00.1: cvl_0_1 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@247 -- # create_target_ns 00:09:45.935 08:06:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:45.935 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:45.936 08:06:58 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:09:45.936 08:06:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 
00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:09:45.936 10.0.0.1 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@11 -- # local val=167772162 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:09:45.936 10.0.0.2 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # 
get_initiator_ip_address initiator0 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:45.936 08:06:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:45.936 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.936 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.506 ms 00:09:45.936 00:09:45.936 --- 10.0.0.1 ping statistics --- 00:09:45.936 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.936 rtt min/avg/max/mdev = 0.506/0.506/0.506/0.000 ms 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:45.936 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:09:45.937 08:06:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:09:45.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
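Throughout the setup trace above, setup.sh's `val_to_ip` turns a 32-bit integer from the IP pool (167772161, 167772162) into a dotted-quad address with `printf '%u.%u.%u.%u'`. A minimal sketch of that helper follows; the trace only shows the final `printf` with the octets already split, so the shift/mask extraction here is an assumption:

```shell
# Sketch of setup.sh's val_to_ip: unpack a 32-bit integer into dotted-quad
# form. 167772161 is 0x0A000001, i.e. 10.0.0.1. The shift/mask extraction
# is assumed; the trace only shows the final printf with octets pre-split.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This is why the pool only needs `(( _dev++, ip_pool += 2 ))` per initiator/target pair: each pair consumes two consecutive integers, which become consecutive addresses in 10.0.0.0/24.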
00:09:45.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:09:45.937 00:09:45.937 --- 10.0.0.2 ping statistics --- 00:09:45.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.937 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 
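The `get_ip_address` calls in the trace never query the kernel for the assigned address: `set_ip` stored each test IP in the interface's `ifalias` attribute (`echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias`), and the getters simply read it back, running `cat` through `ip netns exec` when the device lives in the target namespace. A sketch of that pattern; the `SYSFS_NET` override is added here purely for testability and is not part of setup.sh, which hardcodes `/sys/class/net`:

```shell
# Sketch of the ifalias-as-IP-store pattern from setup.sh: the test IP is
# written once into /sys/class/net/$dev/ifalias and read back here.
# SYSFS_NET is an added knob so the sketch can run against a fake sysfs
# tree; the real helper hardcodes /sys/class/net.
get_ip_address() {
  local dev=$1 netns=$2 ip
  local sysfs=${SYSFS_NET:-/sys/class/net}
  if [ -n "$netns" ]; then
    ip=$(ip netns exec "$netns" cat "$sysfs/$dev/ifalias")
  else
    ip=$(cat "$sysfs/$dev/ifalias")
  fi
  [ -n "$ip" ] && echo "$ip"
}
```

In the trace this is how `NVMF_FIRST_INITIATOR_IP=10.0.0.1` and `NVMF_FIRST_TARGET_IP=10.0.0.2` are derived; for unmapped devices such as `initiator1` and `target1`, `get_net_dev` returns 1 and the corresponding variable is left empty.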
00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:09:45.937 ' 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=1549567 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 1549567 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1549567 ']' 00:09:45.937 08:06:59 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.937 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.938 [2024-11-20 08:06:59.413218] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:09:45.938 [2024-11-20 08:06:59.413268] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.938 [2024-11-20 08:06:59.493363] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.938 [2024-11-20 08:06:59.533040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.938 [2024-11-20 08:06:59.533075] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:45.938 [2024-11-20 08:06:59.533083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.938 [2024-11-20 08:06:59.533089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.938 [2024-11-20 08:06:59.533094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
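`nvmfappstart` above backgrounds `nvmf_tgt` inside the nvmf_ns_spdk namespace (pid 1549567) and then blocks in `waitforlisten` until the target answers on `/var/tmp/spdk.sock`. A simplified sketch of that wait loop; the retry count and interval are illustrative, and the real autotest_common.sh helper verifies a listening UNIX-domain RPC socket rather than mere path existence:

```shell
# Sketch of the waitforlisten pattern: poll until the backgrounded target
# either dies or its RPC socket path appears. Values are illustrative; the
# real helper retries ~100 times and checks for a usable RPC endpoint,
# not just an existing path.
#
# Usage from the trace (not executed here):
#   ip netns exec nvmf_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
#   waitforlisten $! /var/tmp/spdk.sock
waitforlisten() {
  local pid=$1 sock=$2 retries=${3:-100}
  while (( retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target exited before listening
    [ -e "$sock" ] && return 0               # socket path showed up
    sleep 0.1
  done
  return 1                                   # timed out waiting
}
```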
00:09:45.938 [2024-11-20 08:06:59.533670] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:45.938 [2024-11-20 08:06:59.844545] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:45.938 ************************************ 00:09:45.938 START TEST lvs_grow_clean 00:09:45.938 ************************************ 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:45.938 08:06:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:46.197 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:09:46.197 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:09:46.455 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=743f1bf1-7bfd-4206-9af9-75b0fdc11a82 00:09:46.455 08:07:00 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 743f1bf1-7bfd-4206-9af9-75b0fdc11a82 00:09:46.455 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:09:46.714 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:09:46.714 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:09:46.714 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 743f1bf1-7bfd-4206-9af9-75b0fdc11a82 lvol 150 00:09:46.973 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=7ff58220-2b99-4d51-bb75-800ac795f606 00:09:46.973 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:46.973 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:09:46.973 [2024-11-20 08:07:00.910137] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:09:46.973 [2024-11-20 08:07:00.910187] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:09:46.973 true 00:09:46.973 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 743f1bf1-7bfd-4206-9af9-75b0fdc11a82 00:09:46.973 08:07:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:09:47.232 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:09:47.232 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:09:47.491 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7ff58220-2b99-4d51-bb75-800ac795f606 00:09:47.491 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:09:47.750 [2024-11-20 08:07:01.620293] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.750 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:48.009 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1550065 00:09:48.009 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:48.009 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:09:48.009 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1550065 /var/tmp/bdevperf.sock 00:09:48.009 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1550065 ']' 00:09:48.009 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:48.009 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:48.009 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:48.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:48.009 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:48.009 08:07:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:09:48.009 [2024-11-20 08:07:01.847045] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
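A few records up, lvs_grow_clean asserts `(( data_clusters == 49 ))` after creating the lvstore on the 200 MiB AIO file with `--cluster-sz 4194304`. The arithmetic behind that expectation is sketched below; the one-cluster metadata term is inferred from this run's numbers (50 raw clusters vs. 49 reported), not a fixed SPDK constant:

```shell
# Why the trace expects total_data_clusters == 49: the 200 MiB AIO file
# holds 50 clusters of 4 MiB, and this run loses one cluster to lvstore
# metadata. The single-cluster overhead is inferred from the log.
aio_init_size_mb=200          # truncate -s 200M .../aio_bdev
cluster_sz_mb=4               # --cluster-sz 4194304
raw_clusters=$(( aio_init_size_mb / cluster_sz_mb ))    # 50
metadata_clusters=1           # inferred: 50 raw - 49 reported
data_clusters=$(( raw_clusters - metadata_clusters ))
echo "$data_clusters"         # 49, the value the test checks
```

The same bookkeeping explains the later grow step: truncating the file to 400 MiB and calling `bdev_aio_rescan` doubles the raw cluster count, which `bdev_lvol_grow_lvstore` then exposes as additional data clusters.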
00:09:48.009 [2024-11-20 08:07:01.847088] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1550065 ] 00:09:48.009 [2024-11-20 08:07:01.919747] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:48.009 [2024-11-20 08:07:01.959415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:48.268 08:07:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:48.268 08:07:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:09:48.268 08:07:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:09:48.526 Nvme0n1 00:09:48.526 08:07:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:09:48.785 [ 00:09:48.785 { 00:09:48.785 "name": "Nvme0n1", 00:09:48.785 "aliases": [ 00:09:48.785 "7ff58220-2b99-4d51-bb75-800ac795f606" 00:09:48.785 ], 00:09:48.785 "product_name": "NVMe disk", 00:09:48.785 "block_size": 4096, 00:09:48.785 "num_blocks": 38912, 00:09:48.785 "uuid": "7ff58220-2b99-4d51-bb75-800ac795f606", 00:09:48.785 "numa_id": 1, 00:09:48.785 "assigned_rate_limits": { 00:09:48.785 "rw_ios_per_sec": 0, 00:09:48.785 "rw_mbytes_per_sec": 0, 00:09:48.785 "r_mbytes_per_sec": 0, 00:09:48.785 "w_mbytes_per_sec": 0 00:09:48.785 }, 00:09:48.785 "claimed": false, 00:09:48.785 "zoned": false, 00:09:48.785 "supported_io_types": { 00:09:48.785 "read": true, 
00:09:48.785 "write": true, 00:09:48.785 "unmap": true, 00:09:48.785 "flush": true, 00:09:48.785 "reset": true, 00:09:48.785 "nvme_admin": true, 00:09:48.785 "nvme_io": true, 00:09:48.785 "nvme_io_md": false, 00:09:48.785 "write_zeroes": true, 00:09:48.785 "zcopy": false, 00:09:48.785 "get_zone_info": false, 00:09:48.785 "zone_management": false, 00:09:48.785 "zone_append": false, 00:09:48.785 "compare": true, 00:09:48.785 "compare_and_write": true, 00:09:48.785 "abort": true, 00:09:48.785 "seek_hole": false, 00:09:48.785 "seek_data": false, 00:09:48.785 "copy": true, 00:09:48.785 "nvme_iov_md": false 00:09:48.785 }, 00:09:48.785 "memory_domains": [ 00:09:48.785 { 00:09:48.785 "dma_device_id": "system", 00:09:48.785 "dma_device_type": 1 00:09:48.785 } 00:09:48.785 ], 00:09:48.785 "driver_specific": { 00:09:48.785 "nvme": [ 00:09:48.785 { 00:09:48.785 "trid": { 00:09:48.785 "trtype": "TCP", 00:09:48.785 "adrfam": "IPv4", 00:09:48.785 "traddr": "10.0.0.2", 00:09:48.785 "trsvcid": "4420", 00:09:48.785 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:09:48.785 }, 00:09:48.785 "ctrlr_data": { 00:09:48.785 "cntlid": 1, 00:09:48.785 "vendor_id": "0x8086", 00:09:48.785 "model_number": "SPDK bdev Controller", 00:09:48.785 "serial_number": "SPDK0", 00:09:48.785 "firmware_revision": "25.01", 00:09:48.785 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:09:48.785 "oacs": { 00:09:48.785 "security": 0, 00:09:48.785 "format": 0, 00:09:48.785 "firmware": 0, 00:09:48.785 "ns_manage": 0 00:09:48.785 }, 00:09:48.785 "multi_ctrlr": true, 00:09:48.785 "ana_reporting": false 00:09:48.785 }, 00:09:48.785 "vs": { 00:09:48.785 "nvme_version": "1.3" 00:09:48.785 }, 00:09:48.785 "ns_data": { 00:09:48.785 "id": 1, 00:09:48.785 "can_share": true 00:09:48.785 } 00:09:48.785 } 00:09:48.785 ], 00:09:48.785 "mp_policy": "active_passive" 00:09:48.785 } 00:09:48.785 } 00:09:48.785 ] 00:09:48.785 08:07:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1550126 00:09:48.785 08:07:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:48.786 08:07:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:09:48.786 Running I/O for 10 seconds... 00:09:49.722 Latency(us) 00:09:49.722 [2024-11-20T07:07:03.750Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:49.722 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:49.722 Nvme0n1 : 1.00 23236.00 90.77 0.00 0.00 0.00 0.00 0.00 00:09:49.722 [2024-11-20T07:07:03.750Z] =================================================================================================================== 00:09:49.722 [2024-11-20T07:07:03.750Z] Total : 23236.00 90.77 0.00 0.00 0.00 0.00 0.00 00:09:49.722 00:09:50.659 08:07:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 743f1bf1-7bfd-4206-9af9-75b0fdc11a82 00:09:50.918 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:50.918 Nvme0n1 : 2.00 23346.00 91.20 0.00 0.00 0.00 0.00 0.00 00:09:50.918 [2024-11-20T07:07:04.946Z] =================================================================================================================== 00:09:50.918 [2024-11-20T07:07:04.946Z] Total : 23346.00 91.20 0.00 0.00 0.00 0.00 0.00 00:09:50.918 00:09:50.918 true 00:09:50.918 08:07:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 743f1bf1-7bfd-4206-9af9-75b0fdc11a82 00:09:50.918 08:07:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:09:51.176 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:09:51.176 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:09:51.176 08:07:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1550126 00:09:51.744 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:51.744 Nvme0n1 : 3.00 23421.33 91.49 0.00 0.00 0.00 0.00 0.00 00:09:51.744 [2024-11-20T07:07:05.772Z] =================================================================================================================== 00:09:51.744 [2024-11-20T07:07:05.772Z] Total : 23421.33 91.49 0.00 0.00 0.00 0.00 0.00 00:09:51.744 00:09:53.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:53.122 Nvme0n1 : 4.00 23498.00 91.79 0.00 0.00 0.00 0.00 0.00 00:09:53.122 [2024-11-20T07:07:07.150Z] =================================================================================================================== 00:09:53.122 [2024-11-20T07:07:07.150Z] Total : 23498.00 91.79 0.00 0.00 0.00 0.00 0.00 00:09:53.122 00:09:54.058 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.058 Nvme0n1 : 5.00 23543.20 91.97 0.00 0.00 0.00 0.00 0.00 00:09:54.058 [2024-11-20T07:07:08.086Z] =================================================================================================================== 00:09:54.058 [2024-11-20T07:07:08.086Z] Total : 23543.20 91.97 0.00 0.00 0.00 0.00 0.00 00:09:54.058 00:09:54.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:54.995 Nvme0n1 : 6.00 23600.17 92.19 0.00 0.00 0.00 0.00 0.00 00:09:54.995 [2024-11-20T07:07:09.023Z] =================================================================================================================== 00:09:54.995 
[2024-11-20T07:07:09.023Z] Total : 23600.17 92.19 0.00 0.00 0.00 0.00 0.00 00:09:54.995 00:09:55.933 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:55.933 Nvme0n1 : 7.00 23621.86 92.27 0.00 0.00 0.00 0.00 0.00 00:09:55.933 [2024-11-20T07:07:09.961Z] =================================================================================================================== 00:09:55.933 [2024-11-20T07:07:09.961Z] Total : 23621.86 92.27 0.00 0.00 0.00 0.00 0.00 00:09:55.933 00:09:56.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:56.870 Nvme0n1 : 8.00 23582.38 92.12 0.00 0.00 0.00 0.00 0.00 00:09:56.870 [2024-11-20T07:07:10.898Z] =================================================================================================================== 00:09:56.870 [2024-11-20T07:07:10.898Z] Total : 23582.38 92.12 0.00 0.00 0.00 0.00 0.00 00:09:56.870 00:09:57.807 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:57.807 Nvme0n1 : 9.00 23598.78 92.18 0.00 0.00 0.00 0.00 0.00 00:09:57.807 [2024-11-20T07:07:11.835Z] =================================================================================================================== 00:09:57.807 [2024-11-20T07:07:11.835Z] Total : 23598.78 92.18 0.00 0.00 0.00 0.00 0.00 00:09:57.807 00:09:58.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:58.749 Nvme0n1 : 10.00 23627.70 92.30 0.00 0.00 0.00 0.00 0.00 00:09:58.749 [2024-11-20T07:07:12.777Z] =================================================================================================================== 00:09:58.749 [2024-11-20T07:07:12.777Z] Total : 23627.70 92.30 0.00 0.00 0.00 0.00 0.00 00:09:58.749 00:09:58.749 00:09:58.749 Latency(us) 00:09:58.749 [2024-11-20T07:07:12.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:58.749 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:09:58.749 Nvme0n1 : 10.00 23622.13 92.27 0.00 0.00 5415.14 3198.78 11546.82 00:09:58.749 [2024-11-20T07:07:12.777Z] =================================================================================================================== 00:09:58.749 [2024-11-20T07:07:12.777Z] Total : 23622.13 92.27 0.00 0.00 5415.14 3198.78 11546.82 00:09:58.749 { 00:09:58.749 "results": [ 00:09:58.749 { 00:09:58.749 "job": "Nvme0n1", 00:09:58.749 "core_mask": "0x2", 00:09:58.749 "workload": "randwrite", 00:09:58.749 "status": "finished", 00:09:58.749 "queue_depth": 128, 00:09:58.749 "io_size": 4096, 00:09:58.749 "runtime": 10.002401, 00:09:58.749 "iops": 23622.12832698869, 00:09:58.750 "mibps": 92.27393877729958, 00:09:58.750 "io_failed": 0, 00:09:58.750 "io_timeout": 0, 00:09:58.750 "avg_latency_us": 5415.141888695278, 00:09:58.750 "min_latency_us": 3198.7809523809524, 00:09:58.750 "max_latency_us": 11546.819047619048 00:09:58.750 } 00:09:58.750 ], 00:09:58.750 "core_count": 1 00:09:58.750 } 00:09:58.750 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1550065 00:09:58.750 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1550065 ']' 00:09:58.750 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1550065 00:09:58.750 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:09:58.750 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:59.009 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1550065 00:09:59.009 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:59.009 08:07:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:59.009 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1550065' 00:09:59.009 killing process with pid 1550065 00:09:59.009 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1550065 00:09:59.009 Received shutdown signal, test time was about 10.000000 seconds 00:09:59.009 00:09:59.009 Latency(us) 00:09:59.009 [2024-11-20T07:07:13.037Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:59.009 [2024-11-20T07:07:13.037Z] =================================================================================================================== 00:09:59.009 [2024-11-20T07:07:13.037Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:59.009 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1550065 00:09:59.009 08:07:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:59.268 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:59.526 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:59.527 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 743f1bf1-7bfd-4206-9af9-75b0fdc11a82 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:59.786 [2024-11-20 08:07:13.740898] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 743f1bf1-7bfd-4206-9af9-75b0fdc11a82 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 743f1bf1-7bfd-4206-9af9-75b0fdc11a82 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:59.786 
08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:59.786 08:07:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 743f1bf1-7bfd-4206-9af9-75b0fdc11a82 00:10:00.045 request: 00:10:00.045 { 00:10:00.045 "uuid": "743f1bf1-7bfd-4206-9af9-75b0fdc11a82", 00:10:00.045 "method": "bdev_lvol_get_lvstores", 00:10:00.045 "req_id": 1 00:10:00.045 } 00:10:00.045 Got JSON-RPC error response 00:10:00.045 response: 00:10:00.045 { 00:10:00.045 "code": -19, 00:10:00.045 "message": "No such device" 00:10:00.045 } 00:10:00.045 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:10:00.045 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:00.045 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:00.045 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:00.045 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:00.304 aio_bdev 00:10:00.304 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7ff58220-2b99-4d51-bb75-800ac795f606 00:10:00.304 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=7ff58220-2b99-4d51-bb75-800ac795f606 00:10:00.304 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:00.304 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:10:00.304 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:00.304 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:00.304 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:00.563 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7ff58220-2b99-4d51-bb75-800ac795f606 -t 2000 00:10:00.563 [ 00:10:00.563 { 00:10:00.563 "name": "7ff58220-2b99-4d51-bb75-800ac795f606", 00:10:00.563 "aliases": [ 00:10:00.563 "lvs/lvol" 00:10:00.563 ], 00:10:00.563 "product_name": "Logical Volume", 00:10:00.563 "block_size": 4096, 00:10:00.563 "num_blocks": 38912, 00:10:00.563 "uuid": "7ff58220-2b99-4d51-bb75-800ac795f606", 00:10:00.563 "assigned_rate_limits": { 00:10:00.563 "rw_ios_per_sec": 0, 00:10:00.563 "rw_mbytes_per_sec": 0, 00:10:00.563 "r_mbytes_per_sec": 0, 00:10:00.563 "w_mbytes_per_sec": 0 00:10:00.563 }, 00:10:00.563 "claimed": false, 00:10:00.563 "zoned": false, 00:10:00.563 "supported_io_types": { 00:10:00.563 "read": true, 00:10:00.563 "write": true, 00:10:00.563 "unmap": true, 00:10:00.563 "flush": false, 00:10:00.563 "reset": true, 00:10:00.563 
"nvme_admin": false, 00:10:00.563 "nvme_io": false, 00:10:00.563 "nvme_io_md": false, 00:10:00.563 "write_zeroes": true, 00:10:00.563 "zcopy": false, 00:10:00.563 "get_zone_info": false, 00:10:00.563 "zone_management": false, 00:10:00.563 "zone_append": false, 00:10:00.563 "compare": false, 00:10:00.563 "compare_and_write": false, 00:10:00.564 "abort": false, 00:10:00.564 "seek_hole": true, 00:10:00.564 "seek_data": true, 00:10:00.564 "copy": false, 00:10:00.564 "nvme_iov_md": false 00:10:00.564 }, 00:10:00.564 "driver_specific": { 00:10:00.564 "lvol": { 00:10:00.564 "lvol_store_uuid": "743f1bf1-7bfd-4206-9af9-75b0fdc11a82", 00:10:00.564 "base_bdev": "aio_bdev", 00:10:00.564 "thin_provision": false, 00:10:00.564 "num_allocated_clusters": 38, 00:10:00.564 "snapshot": false, 00:10:00.564 "clone": false, 00:10:00.564 "esnap_clone": false 00:10:00.564 } 00:10:00.564 } 00:10:00.564 } 00:10:00.564 ] 00:10:00.823 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:10:00.823 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 743f1bf1-7bfd-4206-9af9-75b0fdc11a82 00:10:00.823 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:00.823 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:00.823 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 743f1bf1-7bfd-4206-9af9-75b0fdc11a82 00:10:00.823 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:01.083 08:07:14 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:01.083 08:07:14 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7ff58220-2b99-4d51-bb75-800ac795f606 00:10:01.343 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 743f1bf1-7bfd-4206-9af9-75b0fdc11a82 00:10:01.602 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:01.602 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:01.602 00:10:01.602 real 0m15.705s 00:10:01.602 user 0m15.240s 00:10:01.602 sys 0m1.475s 00:10:01.602 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:01.602 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:10:01.602 ************************************ 00:10:01.602 END TEST lvs_grow_clean 00:10:01.602 ************************************ 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:01.862 ************************************ 
00:10:01.862 START TEST lvs_grow_dirty 00:10:01.862 ************************************ 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:01.862 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:02.121 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:10:02.121 08:07:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:10:02.121 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:02.121 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:02.121 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:10:02.379 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:10:02.379 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:10:02.379 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 441bee36-b24a-48ea-a51f-cc07c7509eac lvol 150 00:10:02.639 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=85363a49-9116-4796-b689-b76185212fe0 00:10:02.639 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:02.639 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:10:02.639 [2024-11-20 08:07:16.635086] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:10:02.639 [2024-11-20 08:07:16.635139] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:10:02.639 true 00:10:02.639 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:02.639 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:10:02.897 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:10:02.897 08:07:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:10:03.155 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 85363a49-9116-4796-b689-b76185212fe0 00:10:03.413 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:10:03.413 [2024-11-20 08:07:17.349215] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:03.413 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:03.672 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1552669 00:10:03.672 08:07:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:03.673 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:10:03.673 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1552669 /var/tmp/bdevperf.sock 00:10:03.673 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1552669 ']' 00:10:03.673 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:03.673 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:03.673 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:03.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:03.673 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:03.673 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:03.673 [2024-11-20 08:07:17.582262] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:10:03.673 [2024-11-20 08:07:17.582309] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1552669 ] 00:10:03.673 [2024-11-20 08:07:17.655879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:03.673 [2024-11-20 08:07:17.695200] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:03.931 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:03.931 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:03.931 08:07:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:10:04.189 Nvme0n1 00:10:04.189 08:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:10:04.449 [ 00:10:04.449 { 00:10:04.449 "name": "Nvme0n1", 00:10:04.449 "aliases": [ 00:10:04.449 "85363a49-9116-4796-b689-b76185212fe0" 00:10:04.449 ], 00:10:04.449 "product_name": "NVMe disk", 00:10:04.449 "block_size": 4096, 00:10:04.449 "num_blocks": 38912, 00:10:04.449 "uuid": "85363a49-9116-4796-b689-b76185212fe0", 00:10:04.449 "numa_id": 1, 00:10:04.449 "assigned_rate_limits": { 00:10:04.449 "rw_ios_per_sec": 0, 00:10:04.449 "rw_mbytes_per_sec": 0, 00:10:04.449 "r_mbytes_per_sec": 0, 00:10:04.449 "w_mbytes_per_sec": 0 00:10:04.449 }, 00:10:04.449 "claimed": false, 00:10:04.449 "zoned": false, 00:10:04.449 "supported_io_types": { 00:10:04.449 "read": true, 
00:10:04.449 "write": true, 00:10:04.449 "unmap": true, 00:10:04.449 "flush": true, 00:10:04.449 "reset": true, 00:10:04.449 "nvme_admin": true, 00:10:04.449 "nvme_io": true, 00:10:04.449 "nvme_io_md": false, 00:10:04.449 "write_zeroes": true, 00:10:04.449 "zcopy": false, 00:10:04.449 "get_zone_info": false, 00:10:04.449 "zone_management": false, 00:10:04.449 "zone_append": false, 00:10:04.449 "compare": true, 00:10:04.449 "compare_and_write": true, 00:10:04.449 "abort": true, 00:10:04.449 "seek_hole": false, 00:10:04.449 "seek_data": false, 00:10:04.449 "copy": true, 00:10:04.449 "nvme_iov_md": false 00:10:04.449 }, 00:10:04.449 "memory_domains": [ 00:10:04.449 { 00:10:04.449 "dma_device_id": "system", 00:10:04.449 "dma_device_type": 1 00:10:04.449 } 00:10:04.449 ], 00:10:04.449 "driver_specific": { 00:10:04.449 "nvme": [ 00:10:04.449 { 00:10:04.449 "trid": { 00:10:04.449 "trtype": "TCP", 00:10:04.449 "adrfam": "IPv4", 00:10:04.449 "traddr": "10.0.0.2", 00:10:04.449 "trsvcid": "4420", 00:10:04.449 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:10:04.449 }, 00:10:04.449 "ctrlr_data": { 00:10:04.449 "cntlid": 1, 00:10:04.449 "vendor_id": "0x8086", 00:10:04.449 "model_number": "SPDK bdev Controller", 00:10:04.449 "serial_number": "SPDK0", 00:10:04.449 "firmware_revision": "25.01", 00:10:04.449 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:10:04.449 "oacs": { 00:10:04.449 "security": 0, 00:10:04.449 "format": 0, 00:10:04.449 "firmware": 0, 00:10:04.449 "ns_manage": 0 00:10:04.449 }, 00:10:04.449 "multi_ctrlr": true, 00:10:04.449 "ana_reporting": false 00:10:04.449 }, 00:10:04.449 "vs": { 00:10:04.449 "nvme_version": "1.3" 00:10:04.449 }, 00:10:04.449 "ns_data": { 00:10:04.449 "id": 1, 00:10:04.449 "can_share": true 00:10:04.449 } 00:10:04.449 } 00:10:04.449 ], 00:10:04.449 "mp_policy": "active_passive" 00:10:04.449 } 00:10:04.449 } 00:10:04.449 ] 00:10:04.449 08:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=1552897 00:10:04.449 08:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:10:04.449 08:07:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:04.449 Running I/O for 10 seconds... 00:10:05.386 Latency(us) 00:10:05.386 [2024-11-20T07:07:19.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:05.386 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:05.386 Nvme0n1 : 1.00 22325.00 87.21 0.00 0.00 0.00 0.00 0.00 00:10:05.386 [2024-11-20T07:07:19.414Z] =================================================================================================================== 00:10:05.386 [2024-11-20T07:07:19.414Z] Total : 22325.00 87.21 0.00 0.00 0.00 0.00 0.00 00:10:05.386 00:10:06.322 08:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:06.580 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:06.580 Nvme0n1 : 2.00 22362.50 87.35 0.00 0.00 0.00 0.00 0.00 00:10:06.580 [2024-11-20T07:07:20.608Z] =================================================================================================================== 00:10:06.580 [2024-11-20T07:07:20.608Z] Total : 22362.50 87.35 0.00 0.00 0.00 0.00 0.00 00:10:06.580 00:10:06.580 true 00:10:06.580 08:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:06.580 08:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:10:06.839 08:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:10:06.839 08:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:10:06.839 08:07:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1552897 00:10:07.465 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:07.465 Nvme0n1 : 3.00 22409.67 87.54 0.00 0.00 0.00 0.00 0.00 00:10:07.465 [2024-11-20T07:07:21.493Z] =================================================================================================================== 00:10:07.465 [2024-11-20T07:07:21.493Z] Total : 22409.67 87.54 0.00 0.00 0.00 0.00 0.00 00:10:07.465 00:10:08.431 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:08.431 Nvme0n1 : 4.00 22479.25 87.81 0.00 0.00 0.00 0.00 0.00 00:10:08.431 [2024-11-20T07:07:22.459Z] =================================================================================================================== 00:10:08.431 [2024-11-20T07:07:22.459Z] Total : 22479.25 87.81 0.00 0.00 0.00 0.00 0.00 00:10:08.431 00:10:09.369 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:09.369 Nvme0n1 : 5.00 22543.40 88.06 0.00 0.00 0.00 0.00 0.00 00:10:09.369 [2024-11-20T07:07:23.397Z] =================================================================================================================== 00:10:09.369 [2024-11-20T07:07:23.397Z] Total : 22543.40 88.06 0.00 0.00 0.00 0.00 0.00 00:10:09.369 00:10:10.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:10.745 Nvme0n1 : 6.00 22587.50 88.23 0.00 0.00 0.00 0.00 0.00 00:10:10.745 [2024-11-20T07:07:24.773Z] =================================================================================================================== 00:10:10.745 
[2024-11-20T07:07:24.773Z] Total : 22587.50 88.23 0.00 0.00 0.00 0.00 0.00 00:10:10.745 00:10:11.682 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:11.682 Nvme0n1 : 7.00 22621.29 88.36 0.00 0.00 0.00 0.00 0.00 00:10:11.682 [2024-11-20T07:07:25.710Z] =================================================================================================================== 00:10:11.682 [2024-11-20T07:07:25.710Z] Total : 22621.29 88.36 0.00 0.00 0.00 0.00 0.00 00:10:11.682 00:10:12.618 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:12.618 Nvme0n1 : 8.00 22646.62 88.46 0.00 0.00 0.00 0.00 0.00 00:10:12.618 [2024-11-20T07:07:26.646Z] =================================================================================================================== 00:10:12.618 [2024-11-20T07:07:26.646Z] Total : 22646.62 88.46 0.00 0.00 0.00 0.00 0.00 00:10:12.618 00:10:13.556 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:13.556 Nvme0n1 : 9.00 22671.67 88.56 0.00 0.00 0.00 0.00 0.00 00:10:13.556 [2024-11-20T07:07:27.584Z] =================================================================================================================== 00:10:13.556 [2024-11-20T07:07:27.584Z] Total : 22671.67 88.56 0.00 0.00 0.00 0.00 0.00 00:10:13.556 00:10:14.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:10:14.491 Nvme0n1 : 10.00 22690.90 88.64 0.00 0.00 0.00 0.00 0.00 00:10:14.491 [2024-11-20T07:07:28.519Z] =================================================================================================================== 00:10:14.491 [2024-11-20T07:07:28.519Z] Total : 22690.90 88.64 0.00 0.00 0.00 0.00 0.00 00:10:14.491 00:10:14.491 00:10:14.491 Latency(us) 00:10:14.491 [2024-11-20T07:07:28.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.491 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:10:14.491 Nvme0n1 : 10.01 22691.69 88.64 0.00 0.00 5636.84 4337.86 12108.56 00:10:14.491 [2024-11-20T07:07:28.519Z] =================================================================================================================== 00:10:14.491 [2024-11-20T07:07:28.519Z] Total : 22691.69 88.64 0.00 0.00 5636.84 4337.86 12108.56 00:10:14.491 { 00:10:14.491 "results": [ 00:10:14.491 { 00:10:14.491 "job": "Nvme0n1", 00:10:14.491 "core_mask": "0x2", 00:10:14.491 "workload": "randwrite", 00:10:14.491 "status": "finished", 00:10:14.491 "queue_depth": 128, 00:10:14.491 "io_size": 4096, 00:10:14.491 "runtime": 10.005294, 00:10:14.491 "iops": 22691.68702089114, 00:10:14.491 "mibps": 88.63940242535601, 00:10:14.491 "io_failed": 0, 00:10:14.491 "io_timeout": 0, 00:10:14.491 "avg_latency_us": 5636.84428348054, 00:10:14.491 "min_latency_us": 4337.8590476190475, 00:10:14.491 "max_latency_us": 12108.55619047619 00:10:14.491 } 00:10:14.491 ], 00:10:14.491 "core_count": 1 00:10:14.491 } 00:10:14.491 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1552669 00:10:14.491 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1552669 ']' 00:10:14.491 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1552669 00:10:14.491 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:10:14.491 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.491 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1552669 00:10:14.491 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:14.491 08:07:28 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:14.491 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1552669' 00:10:14.491 killing process with pid 1552669 00:10:14.491 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1552669 00:10:14.491 Received shutdown signal, test time was about 10.000000 seconds 00:10:14.491 00:10:14.491 Latency(us) 00:10:14.491 [2024-11-20T07:07:28.519Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:14.491 [2024-11-20T07:07:28.519Z] =================================================================================================================== 00:10:14.491 [2024-11-20T07:07:28.519Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:14.492 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1552669 00:10:14.750 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:15.010 08:07:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:10:15.010 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:15.010 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1549567 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1549567 00:10:15.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1549567 Killed "${NVMF_APP[@]}" "$@" 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=1554713 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 1554713 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1554713 ']' 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:15.269 08:07:29 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:15.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:15.269 08:07:29 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:15.529 [2024-11-20 08:07:29.293386] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:10:15.529 [2024-11-20 08:07:29.293436] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.529 [2024-11-20 08:07:29.373294] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:15.529 [2024-11-20 08:07:29.413353] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.529 [2024-11-20 08:07:29.413389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.529 [2024-11-20 08:07:29.413396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.529 [2024-11-20 08:07:29.413403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.529 [2024-11-20 08:07:29.413409] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:15.529 [2024-11-20 08:07:29.414001] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:16.470 [2024-11-20 08:07:30.340973] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:10:16.470 [2024-11-20 08:07:30.341069] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:10:16.470 [2024-11-20 08:07:30.341096] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 85363a49-9116-4796-b689-b76185212fe0 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=85363a49-9116-4796-b689-b76185212fe0 
00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:16.470 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:16.729 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 85363a49-9116-4796-b689-b76185212fe0 -t 2000 00:10:16.729 [ 00:10:16.729 { 00:10:16.729 "name": "85363a49-9116-4796-b689-b76185212fe0", 00:10:16.729 "aliases": [ 00:10:16.729 "lvs/lvol" 00:10:16.729 ], 00:10:16.729 "product_name": "Logical Volume", 00:10:16.729 "block_size": 4096, 00:10:16.729 "num_blocks": 38912, 00:10:16.729 "uuid": "85363a49-9116-4796-b689-b76185212fe0", 00:10:16.729 "assigned_rate_limits": { 00:10:16.729 "rw_ios_per_sec": 0, 00:10:16.729 "rw_mbytes_per_sec": 0, 00:10:16.729 "r_mbytes_per_sec": 0, 00:10:16.729 "w_mbytes_per_sec": 0 00:10:16.729 }, 00:10:16.729 "claimed": false, 00:10:16.729 "zoned": false, 00:10:16.729 "supported_io_types": { 00:10:16.729 "read": true, 00:10:16.729 "write": true, 00:10:16.729 "unmap": true, 00:10:16.729 "flush": false, 00:10:16.729 "reset": true, 00:10:16.729 "nvme_admin": false, 00:10:16.729 "nvme_io": false, 00:10:16.729 "nvme_io_md": false, 00:10:16.729 "write_zeroes": true, 00:10:16.729 "zcopy": false, 00:10:16.729 "get_zone_info": false, 00:10:16.729 "zone_management": false, 00:10:16.729 "zone_append": 
false, 00:10:16.729 "compare": false, 00:10:16.729 "compare_and_write": false, 00:10:16.729 "abort": false, 00:10:16.729 "seek_hole": true, 00:10:16.729 "seek_data": true, 00:10:16.729 "copy": false, 00:10:16.729 "nvme_iov_md": false 00:10:16.729 }, 00:10:16.729 "driver_specific": { 00:10:16.729 "lvol": { 00:10:16.729 "lvol_store_uuid": "441bee36-b24a-48ea-a51f-cc07c7509eac", 00:10:16.729 "base_bdev": "aio_bdev", 00:10:16.729 "thin_provision": false, 00:10:16.729 "num_allocated_clusters": 38, 00:10:16.729 "snapshot": false, 00:10:16.729 "clone": false, 00:10:16.729 "esnap_clone": false 00:10:16.729 } 00:10:16.729 } 00:10:16.729 } 00:10:16.729 ] 00:10:16.729 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:16.729 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:16.729 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:10:16.988 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:10:16.989 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:16.989 08:07:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:10:17.248 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:10:17.248 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:10:17.507 [2024-11-20 08:07:31.277684] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:10:17.507 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:17.508 08:07:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:17.508 request: 00:10:17.508 { 00:10:17.508 "uuid": "441bee36-b24a-48ea-a51f-cc07c7509eac", 00:10:17.508 "method": "bdev_lvol_get_lvstores", 00:10:17.508 "req_id": 1 00:10:17.508 } 00:10:17.508 Got JSON-RPC error response 00:10:17.508 response: 00:10:17.508 { 00:10:17.508 "code": -19, 00:10:17.508 "message": "No such device" 00:10:17.508 } 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:10:17.508 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:10:17.767 aio_bdev 00:10:17.767 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 85363a49-9116-4796-b689-b76185212fe0 00:10:17.767 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=85363a49-9116-4796-b689-b76185212fe0 00:10:17.767 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:10:17.767 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:10:17.767 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:10:17.767 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:10:17.767 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:10:18.026 08:07:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 85363a49-9116-4796-b689-b76185212fe0 -t 2000 00:10:18.026 [ 00:10:18.026 { 00:10:18.026 "name": "85363a49-9116-4796-b689-b76185212fe0", 00:10:18.026 "aliases": [ 00:10:18.026 "lvs/lvol" 00:10:18.026 ], 00:10:18.026 "product_name": "Logical Volume", 00:10:18.026 "block_size": 4096, 00:10:18.026 "num_blocks": 38912, 00:10:18.026 "uuid": "85363a49-9116-4796-b689-b76185212fe0", 00:10:18.026 "assigned_rate_limits": { 00:10:18.026 "rw_ios_per_sec": 0, 00:10:18.026 "rw_mbytes_per_sec": 0, 00:10:18.026 "r_mbytes_per_sec": 0, 00:10:18.026 "w_mbytes_per_sec": 0 00:10:18.026 }, 00:10:18.026 "claimed": false, 00:10:18.026 "zoned": false, 00:10:18.026 "supported_io_types": { 00:10:18.026 "read": true, 00:10:18.026 "write": true, 00:10:18.026 "unmap": true, 00:10:18.026 "flush": false, 00:10:18.026 "reset": true, 00:10:18.026 "nvme_admin": false, 00:10:18.026 "nvme_io": false, 00:10:18.026 "nvme_io_md": false, 00:10:18.026 "write_zeroes": true, 00:10:18.026 "zcopy": false, 00:10:18.026 "get_zone_info": false, 00:10:18.026 "zone_management": false, 00:10:18.026 "zone_append": false, 00:10:18.026 "compare": false, 00:10:18.026 "compare_and_write": false, 
00:10:18.026 "abort": false, 00:10:18.026 "seek_hole": true, 00:10:18.026 "seek_data": true, 00:10:18.026 "copy": false, 00:10:18.026 "nvme_iov_md": false 00:10:18.026 }, 00:10:18.026 "driver_specific": { 00:10:18.026 "lvol": { 00:10:18.026 "lvol_store_uuid": "441bee36-b24a-48ea-a51f-cc07c7509eac", 00:10:18.026 "base_bdev": "aio_bdev", 00:10:18.026 "thin_provision": false, 00:10:18.026 "num_allocated_clusters": 38, 00:10:18.026 "snapshot": false, 00:10:18.026 "clone": false, 00:10:18.026 "esnap_clone": false 00:10:18.026 } 00:10:18.026 } 00:10:18.026 } 00:10:18.026 ] 00:10:18.285 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:10:18.285 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:18.285 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:10:18.285 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:10:18.285 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:18.285 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:10:18.544 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:10:18.544 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 85363a49-9116-4796-b689-b76185212fe0 00:10:18.802 08:07:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 441bee36-b24a-48ea-a51f-cc07c7509eac 00:10:18.802 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:10:19.061 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:10:19.061 00:10:19.061 real 0m17.311s 00:10:19.061 user 0m43.301s 00:10:19.061 sys 0m4.097s 00:10:19.061 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:19.061 08:07:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:10:19.061 ************************************ 00:10:19.061 END TEST lvs_grow_dirty 00:10:19.061 ************************************ 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:10:19.061 nvmf_trace.0 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:19.061 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:19.321 rmmod nvme_tcp 00:10:19.321 rmmod nvme_fabrics 00:10:19.321 rmmod nvme_keyring 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 1554713 ']' 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 1554713 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1554713 ']' 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1554713 
00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1554713 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1554713' 00:10:19.321 killing process with pid 1554713 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1554713 00:10:19.321 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1554713 00:10:19.580 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:19.580 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:10:19.580 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:10:19.580 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:10:19.580 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:19.580 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:19.580 08:07:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:10:21.486 08:07:35 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:10:21.486 00:10:21.486 real 0m42.443s 00:10:21.486 user 1m4.831s 00:10:21.486 sys 0m10.555s 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:10:21.486 ************************************ 00:10:21.486 END TEST nvmf_lvs_grow 00:10:21.486 ************************************ 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:21.486 08:07:35 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:21.745 ************************************ 00:10:21.745 START TEST nvmf_bdev_io_wait 00:10:21.745 ************************************ 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
00:10:21.745 * Looking for test storage... 00:10:21.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:10:21.745 08:07:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:21.745 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:21.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.746 --rc genhtml_branch_coverage=1 00:10:21.746 --rc genhtml_function_coverage=1 00:10:21.746 --rc genhtml_legend=1 00:10:21.746 --rc geninfo_all_blocks=1 00:10:21.746 --rc geninfo_unexecuted_blocks=1 00:10:21.746 00:10:21.746 ' 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:21.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.746 --rc genhtml_branch_coverage=1 00:10:21.746 --rc genhtml_function_coverage=1 00:10:21.746 --rc genhtml_legend=1 00:10:21.746 --rc geninfo_all_blocks=1 00:10:21.746 --rc geninfo_unexecuted_blocks=1 00:10:21.746 00:10:21.746 ' 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:21.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.746 --rc genhtml_branch_coverage=1 00:10:21.746 --rc genhtml_function_coverage=1 00:10:21.746 --rc genhtml_legend=1 00:10:21.746 --rc geninfo_all_blocks=1 00:10:21.746 --rc geninfo_unexecuted_blocks=1 00:10:21.746 00:10:21.746 ' 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:21.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:21.746 --rc genhtml_branch_coverage=1 00:10:21.746 --rc genhtml_function_coverage=1 00:10:21.746 --rc genhtml_legend=1 00:10:21.746 --rc geninfo_all_blocks=1 00:10:21.746 --rc geninfo_unexecuted_blocks=1 00:10:21.746 00:10:21.746 ' 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:10:21.746 08:07:35 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@50 -- # : 0 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:21.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 
00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:10:21.746 08:07:35 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:28.312 08:07:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # local -ga e810 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:28.312 08:07:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:28.312 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:28.313 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:28.313 08:07:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:28.313 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:28.313 Found net devices under 0000:86:00.0: cvl_0_0 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:28.313 Found net devices under 0000:86:00.1: cvl_0_1 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:28.313 
08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@247 -- # create_target_ns 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:28.313 08:07:41 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias
00:10:28.313 10.0.0.1
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2
00:10:28.313 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:10:28.313 10.0.0.2
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up cvl_0_0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns=
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up'
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ phy == veth ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ phy == veth ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=1 pair
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:10:28.314 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:28.314 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.336 ms
00:10:28.314
00:10:28.314 --- 10.0.0.1 ping statistics ---
00:10:28.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:28.314 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:10:28.314 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:28.314 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms
00:10:28.314
00:10:28.314 --- 10.0.0.2 ping statistics ---
00:10:28.314 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:28.314 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ ))
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0
00:10:28.314 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP=
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]]
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2
00:10:28.315 '
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=1558995
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 1558995
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1558995 ']'
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:28.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable
00:10:28.315 08:07:41 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:28.315 [2024-11-20 08:07:41.802212] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization...
00:10:28.315 [2024-11-20 08:07:41.802261] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:28.315 [2024-11-20 08:07:41.881793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:28.315 [2024-11-20 08:07:41.924996] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:28.315 [2024-11-20 08:07:41.925032] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:28.315 [2024-11-20 08:07:41.925039] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:28.315 [2024-11-20 08:07:41.925045] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:28.315 [2024-11-20 08:07:41.925050] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:28.315 [2024-11-20 08:07:41.926517] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:10:28.315 [2024-11-20 08:07:41.926625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:10:28.315 [2024-11-20 08:07:41.926735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:10:28.315 [2024-11-20 08:07:41.926735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:28.882 [2024-11-20 08:07:42.752073] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:28.882 Malloc0
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x
00:10:28.882 [2024-11-20 08:07:42.807129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:10:28.882 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1559099
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1559101
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=()
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:10:28.883 {
00:10:28.883 "params": {
00:10:28.883 "name": "Nvme$subsystem",
00:10:28.883 "trtype": "$TEST_TRANSPORT",
00:10:28.883 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:28.883 "adrfam": "ipv4",
00:10:28.883 "trsvcid": "$NVMF_PORT",
00:10:28.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:28.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:28.883 "hdgst": ${hdgst:-false},
00:10:28.883 "ddgst": ${ddgst:-false}
00:10:28.883 },
00:10:28.883 "method": "bdev_nvme_attach_controller"
00:10:28.883 }
00:10:28.883 EOF
00:10:28.883 )")
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1559103
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=()
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:10:28.883 {
00:10:28.883 "params": {
00:10:28.883 "name": "Nvme$subsystem",
00:10:28.883 "trtype": "$TEST_TRANSPORT",
00:10:28.883 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:28.883 "adrfam": "ipv4",
00:10:28.883 "trsvcid": "$NVMF_PORT",
00:10:28.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:28.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:28.883 "hdgst": ${hdgst:-false},
00:10:28.883 "ddgst": ${ddgst:-false}
00:10:28.883 },
00:10:28.883 "method": "bdev_nvme_attach_controller"
00:10:28.883 }
00:10:28.883 EOF
00:10:28.883 )")
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1559106
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=()
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:10:28.883 {
00:10:28.883 "params": {
00:10:28.883 "name": "Nvme$subsystem",
00:10:28.883 "trtype": "$TEST_TRANSPORT",
00:10:28.883 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:28.883 "adrfam": "ipv4",
00:10:28.883 "trsvcid": "$NVMF_PORT",
00:10:28.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:28.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:28.883 "hdgst": ${hdgst:-false},
00:10:28.883 "ddgst": ${ddgst:-false}
00:10:28.883 },
00:10:28.883 "method": "bdev_nvme_attach_controller"
00:10:28.883 }
00:10:28.883 EOF
00:10:28.883 )")
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=()
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:10:28.883 {
00:10:28.883 "params": {
00:10:28.883 "name": "Nvme$subsystem",
00:10:28.883 "trtype": "$TEST_TRANSPORT",
00:10:28.883 "traddr": "$NVMF_FIRST_TARGET_IP",
00:10:28.883 "adrfam": "ipv4",
00:10:28.883 "trsvcid": "$NVMF_PORT",
00:10:28.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:10:28.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:10:28.883 "hdgst": ${hdgst:-false},
00:10:28.883 "ddgst": ${ddgst:-false}
00:10:28.883 },
00:10:28.883 "method": "bdev_nvme_attach_controller"
00:10:28.883 }
00:10:28.883 EOF
00:10:28.883 )")
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1559099
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq .
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq .
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq .
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=,
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:10:28.883 "params": {
00:10:28.883 "name": "Nvme1",
00:10:28.883 "trtype": "tcp",
00:10:28.883 "traddr": "10.0.0.2",
00:10:28.883 "adrfam": "ipv4",
00:10:28.883 "trsvcid": "4420",
00:10:28.883 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:28.883 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:28.883 "hdgst": false,
00:10:28.883 "ddgst": false
00:10:28.883 },
00:10:28.883 "method": "bdev_nvme_attach_controller"
00:10:28.883 }'
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq .
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=,
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:10:28.883 "params": {
00:10:28.883 "name": "Nvme1",
00:10:28.883 "trtype": "tcp",
00:10:28.883 "traddr": "10.0.0.2",
00:10:28.883 "adrfam": "ipv4",
00:10:28.883 "trsvcid": "4420",
00:10:28.883 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:28.883 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:28.883 "hdgst": false,
00:10:28.883 "ddgst": false
00:10:28.883 },
00:10:28.883 "method": "bdev_nvme_attach_controller"
00:10:28.883 }'
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=,
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:10:28.883 "params": {
00:10:28.883 "name": "Nvme1",
00:10:28.883 "trtype": "tcp",
00:10:28.883 "traddr": "10.0.0.2",
00:10:28.883 "adrfam": "ipv4",
00:10:28.883 "trsvcid": "4420",
00:10:28.883 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:28.883 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:28.883 "hdgst": false,
00:10:28.883 "ddgst": false
00:10:28.883 },
00:10:28.883 "method": "bdev_nvme_attach_controller"
00:10:28.883 }'
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=,
00:10:28.883 08:07:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:10:28.883 "params": {
00:10:28.883 "name": "Nvme1",
00:10:28.883 "trtype": "tcp",
00:10:28.883 "traddr": "10.0.0.2",
00:10:28.883 "adrfam": "ipv4",
00:10:28.883 "trsvcid": "4420",
00:10:28.883 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:10:28.883 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:10:28.883 "hdgst": false,
00:10:28.883 "ddgst": false
00:10:28.883 },
00:10:28.883 "method": "bdev_nvme_attach_controller"
00:10:28.883 }'
00:10:28.883 [2024-11-20 08:07:42.857033] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization...
00:10:28.883 [2024-11-20 08:07:42.857085] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ]
00:10:28.883 [2024-11-20 08:07:42.859664] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization...
00:10:28.883 [2024-11-20 08:07:42.859705] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ]
00:10:28.883 [2024-11-20 08:07:42.862300] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization...
00:10:28.883 [2024-11-20 08:07:42.862347] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:10:28.883 [2024-11-20 08:07:42.863681] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:10:28.883 [2024-11-20 08:07:42.863722] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:10:29.142 [2024-11-20 08:07:43.041497] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.142 [2024-11-20 08:07:43.083889] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:10:29.142 [2024-11-20 08:07:43.135717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.402 [2024-11-20 08:07:43.176066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:10:29.402 [2024-11-20 08:07:43.249878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.402 [2024-11-20 08:07:43.300851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:29.402 [2024-11-20 08:07:43.301393] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:10:29.402 [2024-11-20 08:07:43.343139] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:10:29.402 Running I/O for 1 seconds... 00:10:29.402 Running I/O for 1 seconds... 00:10:29.661 Running I/O for 1 seconds... 00:10:29.661 Running I/O for 1 seconds... 
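The `gen_nvmf_target_json` trace above builds one heredoc JSON fragment per subsystem into a bash array (`config+=("$(cat <<-EOF ...)")`), then joins the fragments with `IFS=,` and normalizes the result with `jq .` before feeding it to bdevperf. A minimal standalone sketch of the same pattern (subsystem count and field set trimmed for illustration; this is not the nvmf/common.sh source):

```shell
#!/usr/bin/env bash
# Collect one JSON fragment per subsystem, mirroring the config+=() heredoc pattern.
config=()
for subsystem in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
# Join the fragments with commas; the real script then pipes this through `jq .`
# to validate and pretty-print before handing it to bdevperf via /dev/fd/63.
IFS=,
printf '[%s]\n' "${config[*]}"
```

The comma join works because `"${array[*]}"` uses the first character of `IFS` as the separator, which is why the trace sets `IFS=,` immediately before the final `printf`.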
00:10:30.596 11716.00 IOPS, 45.77 MiB/s 00:10:30.596 Latency(us) 00:10:30.596 [2024-11-20T07:07:44.624Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.596 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:10:30.597 Nvme1n1 : 1.01 11777.39 46.01 0.00 0.00 10831.25 5149.26 13981.01 00:10:30.597 [2024-11-20T07:07:44.625Z] =================================================================================================================== 00:10:30.597 [2024-11-20T07:07:44.625Z] Total : 11777.39 46.01 0.00 0.00 10831.25 5149.26 13981.01 00:10:30.597 10734.00 IOPS, 41.93 MiB/s 00:10:30.597 Latency(us) 00:10:30.597 [2024-11-20T07:07:44.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.597 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:10:30.597 Nvme1n1 : 1.01 10801.61 42.19 0.00 0.00 11812.16 4712.35 20347.37 00:10:30.597 [2024-11-20T07:07:44.625Z] =================================================================================================================== 00:10:30.597 [2024-11-20T07:07:44.625Z] Total : 10801.61 42.19 0.00 0.00 11812.16 4712.35 20347.37 00:10:30.597 10467.00 IOPS, 40.89 MiB/s 00:10:30.597 Latency(us) 00:10:30.597 [2024-11-20T07:07:44.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.597 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:10:30.597 Nvme1n1 : 1.01 10536.62 41.16 0.00 0.00 12107.35 4556.31 23842.62 00:10:30.597 [2024-11-20T07:07:44.625Z] =================================================================================================================== 00:10:30.597 [2024-11-20T07:07:44.625Z] Total : 10536.62 41.16 0.00 0.00 12107.35 4556.31 23842.62 00:10:30.597 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1559101 00:10:30.597 254400.00 IOPS, 993.75 MiB/s 00:10:30.597 Latency(us) 00:10:30.597 
[2024-11-20T07:07:44.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:30.597 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:10:30.597 Nvme1n1 : 1.00 254016.81 992.25 0.00 0.00 501.15 223.33 1505.77 00:10:30.597 [2024-11-20T07:07:44.625Z] =================================================================================================================== 00:10:30.597 [2024-11-20T07:07:44.625Z] Total : 254016.81 992.25 0.00 0.00 501.15 223.33 1505.77 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1559103 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1559106 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in 
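The MiB/s column in the bdevperf tables above follows directly from the IOPS column and the 4096-byte IO size (MiB/s = IOPS × 4096 / 2^20). Checking the first write-job line as an example:

```shell
# 11716 writes/s at 4 KiB each: 11716 * 4096 / 1048576 ≈ 45.77 MiB/s,
# matching the "11716.00 IOPS, 45.77 MiB/s" line in the log.
awk 'BEGIN { printf "%.2f MiB/s\n", 11716 * 4096 / (1024 * 1024) }'
```

The same arithmetic explains the flush job's outsized 993.75 MiB/s figure: flushes complete without moving data, so the 254400 IOPS line is bounded by command overhead rather than bandwidth.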
{1..20} 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:30.854 rmmod nvme_tcp 00:10:30.854 rmmod nvme_fabrics 00:10:30.854 rmmod nvme_keyring 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 1558995 ']' 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 1558995 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1558995 ']' 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1558995 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1558995 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1558995' 00:10:30.854 killing process with pid 1558995 00:10:30.854 08:07:44 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1558995 00:10:30.854 08:07:44 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1558995 00:10:31.113 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:31.113 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:10:31.113 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:10:31.113 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:10:31.113 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:31.113 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:31.113 08:07:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:10:33.648 08:07:47 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:10:33.648 00:10:33.648 real 0m11.577s 00:10:33.648 user 0m19.349s 00:10:33.648 sys 0m6.199s 00:10:33.648 
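The final `iptr` cleanup step above restores every iptables rule except those tagged `SPDK_NVMF`, by piping `iptables-save` through `grep -v` into `iptables-restore`. The filtering itself is plain line rejection; a toy illustration with hypothetical rule text (no root or live firewall needed):

```shell
# Simulate `iptables-save | grep -v SPDK_NVMF`: drop only the SPDK-tagged
# rules from a saved ruleset and keep everything else intact.
printf '%s\n' \
  '-A INPUT -i lo -j ACCEPT' \
  '-A INPUT -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF' \
  '-A INPUT -j DROP' \
  | grep -v SPDK_NVMF
```

In the real pipeline the surviving lines are fed straight to `iptables-restore`, so any rule the test suite added with an `SPDK_NVMF` comment disappears atomically while unrelated rules survive.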
08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:10:33.648 ************************************ 00:10:33.648 END TEST nvmf_bdev_io_wait 00:10:33.648 ************************************ 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.648 ************************************ 00:10:33.648 START TEST nvmf_queue_depth 00:10:33.648 ************************************ 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:10:33.648 * Looking for test storage... 
00:10:33.648 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:10:33.648 
08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:10:33.648 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:33.649 --rc lcov_branch_coverage=1 --rc 
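The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.`, `-`, and `:` (`IFS=.-:` then `read -ra`), then walks the components left to right, treating missing components as zero, so `1.15 < 2` holds even though `15 > 2` as plain strings. A compact sketch of the same comparison (this is a simplified stand-in, not the scripts/common.sh source, and it only handles numeric components):

```shell
#!/usr/bin/env bash
# Return success (0) when version $1 < version $2, comparing
# dot/dash/colon-separated numeric fields; missing fields default to 0.
version_lt() {
  local IFS=.-:
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1  # equal versions are not "less than"
}
version_lt 1.15 2 && echo "1.15 < 2"
```

The trace uses this to decide whether the installed `lcov` predates 2.x and pick the matching coverage flags.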
lcov_function_coverage=1 00:10:33.649 --rc genhtml_branch_coverage=1 00:10:33.649 --rc genhtml_function_coverage=1 00:10:33.649 --rc genhtml_legend=1 00:10:33.649 --rc geninfo_all_blocks=1 00:10:33.649 --rc geninfo_unexecuted_blocks=1 00:10:33.649 00:10:33.649 ' 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:33.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.649 --rc genhtml_branch_coverage=1 00:10:33.649 --rc genhtml_function_coverage=1 00:10:33.649 --rc genhtml_legend=1 00:10:33.649 --rc geninfo_all_blocks=1 00:10:33.649 --rc geninfo_unexecuted_blocks=1 00:10:33.649 00:10:33.649 ' 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:33.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.649 --rc genhtml_branch_coverage=1 00:10:33.649 --rc genhtml_function_coverage=1 00:10:33.649 --rc genhtml_legend=1 00:10:33.649 --rc geninfo_all_blocks=1 00:10:33.649 --rc geninfo_unexecuted_blocks=1 00:10:33.649 00:10:33.649 ' 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:33.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:33.649 --rc genhtml_branch_coverage=1 00:10:33.649 --rc genhtml_function_coverage=1 00:10:33.649 --rc genhtml_legend=1 00:10:33.649 --rc geninfo_all_blocks=1 00:10:33.649 --rc geninfo_unexecuted_blocks=1 00:10:33.649 00:10:33.649 ' 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.649 08:07:47 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
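The `paths/export.sh` trace above prepends the same golangci/protoc/go directories each time the file is sourced, which is why `PATH` in the log carries many duplicate entries; lookup still works (the first hit wins), it is just noisy. A hedged sketch of a first-occurrence dedup pass, which is not part of the SPDK scripts but shows how such a `PATH` could be compacted:

```shell
#!/usr/bin/env bash
# Remove duplicate PATH entries while preserving first-occurrence order.
dedup_path() {
  local IFS=: entry seen=: out=
  for entry in $1; do
    case "$seen" in *":$entry:"*) continue ;; esac  # already emitted
    seen="$seen$entry:"
    out="${out:+$out:}$entry"
  done
  printf '%s\n' "$out"
}
dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/local/bin:/usr/bin"
```

Keeping first occurrences (rather than last) preserves the prepend semantics the export script relies on: the toolchain directories still shadow the system ones.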
nvmf/common.sh@50 -- # : 0 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:33.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 
00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:33.649 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:10:33.650 08:07:47 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:10:40.259 08:07:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:40.259 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:40.260 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in 
"${pci_devs[@]}" 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:40.260 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.260 08:07:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:40.260 Found net devices under 0000:86:00.0: cvl_0_0 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:40.260 Found net devices under 0000:86:00.1: cvl_0_1 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ 
tcp == tcp ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@247 -- # create_target_ns 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA 
dev_map 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:10:40.260 08:07:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:10:40.260 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:40.260 10.0.0.1 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:40.261 10.0.0.2 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:10:40.261 
08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:40.261 08:07:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 
00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:40.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:40.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:10:40.261 00:10:40.261 --- 10.0.0.1 ping statistics --- 00:10:40.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.261 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:40.261 08:07:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:10:40.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:40.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:10:40.261 00:10:40.261 --- 10.0.0.2 ping statistics --- 00:10:40.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:40.261 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:10:40.261 
08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:10:40.261 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@174 -- # get_ip_address initiator1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:40.262 08:07:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:10:40.262 ' 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 
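The interface setup traced above assigns addresses by converting a 32-bit pool value into dotted-quad form (167772161 becomes 10.0.0.1 for `cvl_0_0`, 167772162 becomes 10.0.0.2 for `cvl_0_1`). A small sketch of that conversion, assuming the helper simply unpacks the four octets the way the `printf '%u.%u.%u.%u\n' 10 0 0 1` line in the trace suggests:

```shell
# Sketch of the val_to_ip-style conversion seen in the trace: unpack a
# 32-bit integer into four octets and print dotted-quad notation.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1, assigned to the initiator side (cvl_0_0)
val_to_ip 167772162   # 10.0.0.2, assigned to the target side (cvl_0_1)
```

The setup loop then increments the pool by two per interface pair, which is why the `(( ip_pool += 2 ))` step appears between iterations in the trace.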
00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=1563131 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 1563131 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1563131 ']' 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:40.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.262 [2024-11-20 08:07:53.584512] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:10:40.262 [2024-11-20 08:07:53.584559] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:40.262 [2024-11-20 08:07:53.662989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.262 [2024-11-20 08:07:53.703100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:40.262 [2024-11-20 08:07:53.703134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:40.262 [2024-11-20 08:07:53.703141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:40.262 [2024-11-20 08:07:53.703146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:40.262 [2024-11-20 08:07:53.703151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:40.262 [2024-11-20 08:07:53.703711] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.262 [2024-11-20 08:07:53.834471] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.262 Malloc0 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.262 08:07:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.262 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.263 [2024-11-20 08:07:53.884644] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1563156 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM 
EXIT 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1563156 /var/tmp/bdevperf.sock 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1563156 ']' 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:10:40.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:40.263 08:07:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.263 [2024-11-20 08:07:53.935578] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:10:40.263 [2024-11-20 08:07:53.935618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1563156 ] 00:10:40.263 [2024-11-20 08:07:54.009051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:40.263 [2024-11-20 08:07:54.052613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:40.263 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:40.263 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:10:40.263 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:10:40.263 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:40.263 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:40.521 NVMe0n1 00:10:40.521 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:40.521 08:07:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:10:40.521 Running I/O for 10 seconds... 
00:10:42.395 12022.00 IOPS, 46.96 MiB/s [2024-11-20T07:07:57.801Z] 12285.00 IOPS, 47.99 MiB/s [2024-11-20T07:07:58.736Z] 12301.00 IOPS, 48.05 MiB/s [2024-11-20T07:07:59.673Z] 12336.75 IOPS, 48.19 MiB/s [2024-11-20T07:08:00.609Z] 12343.80 IOPS, 48.22 MiB/s [2024-11-20T07:08:01.546Z] 12354.50 IOPS, 48.26 MiB/s [2024-11-20T07:08:02.483Z] 12419.14 IOPS, 48.51 MiB/s [2024-11-20T07:08:03.859Z] 12407.00 IOPS, 48.46 MiB/s [2024-11-20T07:08:04.796Z] 12426.33 IOPS, 48.54 MiB/s [2024-11-20T07:08:04.796Z] 12476.90 IOPS, 48.74 MiB/s 00:10:50.768 Latency(us) 00:10:50.768 [2024-11-20T07:08:04.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.768 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:10:50.768 Verification LBA range: start 0x0 length 0x4000 00:10:50.768 NVMe0n1 : 10.07 12496.25 48.81 0.00 0.00 81694.27 18849.40 52928.12 00:10:50.768 [2024-11-20T07:08:04.796Z] =================================================================================================================== 00:10:50.768 [2024-11-20T07:08:04.796Z] Total : 12496.25 48.81 0.00 0.00 81694.27 18849.40 52928.12 00:10:50.768 { 00:10:50.768 "results": [ 00:10:50.768 { 00:10:50.768 "job": "NVMe0n1", 00:10:50.768 "core_mask": "0x1", 00:10:50.768 "workload": "verify", 00:10:50.768 "status": "finished", 00:10:50.768 "verify_range": { 00:10:50.768 "start": 0, 00:10:50.768 "length": 16384 00:10:50.768 }, 00:10:50.768 "queue_depth": 1024, 00:10:50.768 "io_size": 4096, 00:10:50.768 "runtime": 10.065338, 00:10:50.768 "iops": 12496.251988755866, 00:10:50.768 "mibps": 48.8134843310776, 00:10:50.768 "io_failed": 0, 00:10:50.768 "io_timeout": 0, 00:10:50.768 "avg_latency_us": 81694.26549263466, 00:10:50.768 "min_latency_us": 18849.401904761904, 00:10:50.768 "max_latency_us": 52928.1219047619 00:10:50.768 } 00:10:50.768 ], 00:10:50.768 "core_count": 1 00:10:50.768 } 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 1563156 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1563156 ']' 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1563156 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1563156 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1563156' 00:10:50.768 killing process with pid 1563156 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1563156 00:10:50.768 Received shutdown signal, test time was about 10.000000 seconds 00:10:50.768 00:10:50.768 Latency(us) 00:10:50.768 [2024-11-20T07:08:04.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:50.768 [2024-11-20T07:08:04.796Z] =================================================================================================================== 00:10:50.768 [2024-11-20T07:08:04.796Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1563156 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:50.768 rmmod nvme_tcp 00:10:50.768 rmmod nvme_fabrics 00:10:50.768 rmmod nvme_keyring 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 1563131 ']' 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 1563131 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1563131 ']' 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1563131 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:10:50.768 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:51.027 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1563131 00:10:51.027 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:10:51.027 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:51.027 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1563131' 00:10:51.027 killing process with pid 1563131 00:10:51.027 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1563131 00:10:51.027 08:08:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1563131 00:10:51.027 08:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:51.027 08:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:10:51.027 08:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:10:51.027 08:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:10:51.027 08:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:51.027 08:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:51.027 08:08:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:10:53.563 08:08:07 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:10:53.563 
08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:10:53.563 00:10:53.563 real 0m19.919s 00:10:53.563 user 0m23.226s 00:10:53.563 sys 0m6.136s 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:10:53.563 ************************************ 00:10:53.563 END TEST nvmf_queue_depth 00:10:53.563 ************************************ 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:53.563 ************************************ 00:10:53.563 START TEST nvmf_target_multipath 00:10:53.563 ************************************ 00:10:53.563 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:10:53.564 * Looking for test storage... 
00:10:53.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:10:53.564 08:08:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:53.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.564 --rc genhtml_branch_coverage=1 00:10:53.564 --rc genhtml_function_coverage=1 00:10:53.564 --rc genhtml_legend=1 00:10:53.564 --rc geninfo_all_blocks=1 00:10:53.564 --rc geninfo_unexecuted_blocks=1 00:10:53.564 00:10:53.564 ' 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:53.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.564 --rc genhtml_branch_coverage=1 00:10:53.564 --rc genhtml_function_coverage=1 00:10:53.564 --rc genhtml_legend=1 00:10:53.564 --rc geninfo_all_blocks=1 00:10:53.564 --rc geninfo_unexecuted_blocks=1 00:10:53.564 00:10:53.564 ' 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:53.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.564 --rc genhtml_branch_coverage=1 00:10:53.564 --rc genhtml_function_coverage=1 00:10:53.564 --rc genhtml_legend=1 00:10:53.564 --rc geninfo_all_blocks=1 00:10:53.564 --rc geninfo_unexecuted_blocks=1 00:10:53.564 00:10:53.564 ' 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:53.564 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:53.564 --rc genhtml_branch_coverage=1 00:10:53.564 --rc genhtml_function_coverage=1 00:10:53.564 --rc genhtml_legend=1 00:10:53.564 --rc geninfo_all_blocks=1 00:10:53.564 --rc geninfo_unexecuted_blocks=1 00:10:53.564 00:10:53.564 ' 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:53.564 
08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:53.564 08:08:07 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:53.564 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:53.565 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # xtrace_disable 00:10:53.565 08:08:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@131 -- # pci_devs=() 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@132 -- # local -a pci_net_devs 
00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@135 -- # net_devs=() 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@136 -- # e810=() 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@136 -- # local -ga e810 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@137 -- # x722=() 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@137 -- # local -ga x722 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@138 -- # mlx=() 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@138 -- # local -ga mlx 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:00.133 08:08:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:00.133 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:00.134 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:00.134 08:08:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:00.134 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:00.134 Found net devices under 0000:86:00.0: cvl_0_0 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:00.134 Found net 
devices under 0000:86:00.1: cvl_0_1 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # is_hw=yes 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@247 -- # create_target_ns 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local -g _dev 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:00.134 08:08:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:00.134 
08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:00.134 10.0.0.1 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:00.134 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:00.135 08:08:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:00.135 10.0.0.2 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set 
cvl_0_1 up 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:00.135 08:08:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:00.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:00.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.365 ms 00:11:00.135 00:11:00.135 --- 10.0.0.1 ping statistics --- 00:11:00.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.135 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:00.135 
08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:00.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:00.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:11:00.135 00:11:00.135 --- 10.0.0.2 ping statistics --- 00:11:00.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:00.135 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # return 0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath 
-- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:00.135 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:00.136 08:08:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 
00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:11:00.136 ' 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:11:00.136 only one NIC for nvmf test 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:11:00.136 08:08:13 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:00.136 rmmod nvme_tcp 00:11:00.136 rmmod nvme_fabrics 00:11:00.136 rmmod nvme_keyring 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:00.136 08:08:13 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:11:02.042 08:08:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 
00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:11:02.042 08:08:15 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:02.042 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:11:02.042 00:11:02.042 real 0m8.546s 00:11:02.042 user 0m1.903s 00:11:02.042 sys 0m4.653s 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:11:02.043 ************************************ 00:11:02.043 END TEST nvmf_target_multipath 00:11:02.043 ************************************ 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.043 ************************************ 00:11:02.043 START TEST nvmf_zcopy 00:11:02.043 ************************************ 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:11:02.043 * Looking for test storage... 
00:11:02.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.043 
08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:02.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.043 --rc genhtml_branch_coverage=1 00:11:02.043 --rc genhtml_function_coverage=1 00:11:02.043 --rc genhtml_legend=1 00:11:02.043 --rc geninfo_all_blocks=1 00:11:02.043 --rc 
geninfo_unexecuted_blocks=1 00:11:02.043 00:11:02.043 ' 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:02.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.043 --rc genhtml_branch_coverage=1 00:11:02.043 --rc genhtml_function_coverage=1 00:11:02.043 --rc genhtml_legend=1 00:11:02.043 --rc geninfo_all_blocks=1 00:11:02.043 --rc geninfo_unexecuted_blocks=1 00:11:02.043 00:11:02.043 ' 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:02.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.043 --rc genhtml_branch_coverage=1 00:11:02.043 --rc genhtml_function_coverage=1 00:11:02.043 --rc genhtml_legend=1 00:11:02.043 --rc geninfo_all_blocks=1 00:11:02.043 --rc geninfo_unexecuted_blocks=1 00:11:02.043 00:11:02.043 ' 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:02.043 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.043 --rc genhtml_branch_coverage=1 00:11:02.043 --rc genhtml_function_coverage=1 00:11:02.043 --rc genhtml_legend=1 00:11:02.043 --rc geninfo_all_blocks=1 00:11:02.043 --rc geninfo_unexecuted_blocks=1 00:11:02.043 00:11:02.043 ' 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
paths/export.sh@5 -- # export PATH 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:11:02.043 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:02.044 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:11:02.044 08:08:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:08.685 
08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:08.685 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:08.686 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 
00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:08.686 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:08.686 08:08:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:08.686 Found net devices under 0000:86:00.0: cvl_0_0 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:08.686 Found net devices under 0000:86:00.1: cvl_0_1 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:08.686 08:08:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@247 -- # create_target_ns 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:11:08.686 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:08.687 10.0.0.1 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 
NVMF_TARGET_NS_CMD 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:08.687 10.0.0.2 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:11:08.687 08:08:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:08.687 08:08:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:08.687 
08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:08.687 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:08.688 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:08.688 08:08:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:08.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:08.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.422 ms 00:11:08.688 00:11:08.688 --- 10.0.0.1 ping statistics --- 00:11:08.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.688 rtt min/avg/max/mdev = 0.422/0.422/0.422/0.000 ms 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 
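The `set_ip` steps earlier in the trace carry addresses around as 32-bit integers (167772161, 167772162) and convert them with `val_to_ip` via `printf '%u.%u.%u.%u\n'`. A minimal self-contained sketch of that conversion, assuming the shift-and-mask unpacking that produces the arguments seen in the log:

```shell
# Hedged sketch of val_to_ip: unpack a 32-bit integer into four octets.
# The real helper lives in nvmf/setup.sh; the shift/mask arithmetic here
# is an assumption consistent with the printf arguments in the trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (0x0a000001, the initiator side)
val_to_ip 167772162   # 10.0.0.2 (the target side, inside nvmf_ns_spdk)
```

Keeping the pool as an integer lets `setup_interfaces` hand each initiator/target pair two consecutive addresses with plain arithmetic (`ip_pool += 2` per pair).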
00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:08.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:08.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:11:08.688 00:11:08.688 --- 10.0.0.2 ping statistics --- 00:11:08.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:08.688 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:08.688 
08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:11:08.688 
08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:08.688 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@343 -- # 
RDMA_IP_LIST='10.0.0.2 00:11:08.689 ' 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=1572102 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 1572102 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1572102 ']' 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:08.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:08.689 [2024-11-20 08:08:22.191764] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:11:08.689 [2024-11-20 08:08:22.191816] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:08.689 [2024-11-20 08:08:22.271189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.689 [2024-11-20 08:08:22.311835] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:08.689 [2024-11-20 08:08:22.311871] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:08.689 [2024-11-20 08:08:22.311878] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:08.689 [2024-11-20 08:08:22.311884] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:08.689 [2024-11-20 08:08:22.311889] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
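The `waitforlisten 1572102` step above blocks until the freshly launched `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock` (with `max_retries=100`, per the trace). A hedged sketch of that polling loop, under the assumption that it checks process liveness plus the RPC socket; the real helper in autotest_common.sh may differ in detail:

```shell
# Hedged sketch of waitforlisten: poll until the target pid is alive AND
# its RPC UNIX socket exists, or give up after max_retries iterations.
waitforlisten_sketch() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100} i
  for ((i = 0; i < max_retries; i++)); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died early
    [ -S "$rpc_addr" ] && return 0           # RPC socket is up
    sleep 0.1
  done
  return 1                                   # timed out waiting
}
```

The "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." message in the log corresponds to this wait; the subsequent `rpc_cmd` calls only work once the socket exists.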
00:11:08.689 [2024-11-20 08:08:22.312476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:08.689 [2024-11-20 08:08:22.448174] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:08.689 [2024-11-20 08:08:22.468359] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.689 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:08.690 malloc0 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:08.690 08:08:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:11:08.690 { 00:11:08.690 "params": { 00:11:08.690 "name": "Nvme$subsystem", 00:11:08.690 "trtype": "$TEST_TRANSPORT", 00:11:08.690 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:08.690 "adrfam": "ipv4", 00:11:08.690 "trsvcid": "$NVMF_PORT", 00:11:08.690 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:08.690 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:08.690 "hdgst": ${hdgst:-false}, 00:11:08.690 "ddgst": ${ddgst:-false} 00:11:08.690 }, 00:11:08.690 "method": "bdev_nvme_attach_controller" 00:11:08.690 } 00:11:08.690 EOF 00:11:08.690 )") 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:11:08.690 08:08:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:11:08.690 "params": { 00:11:08.690 "name": "Nvme1", 00:11:08.690 "trtype": "tcp", 00:11:08.690 "traddr": "10.0.0.2", 00:11:08.690 "adrfam": "ipv4", 00:11:08.690 "trsvcid": "4420", 00:11:08.690 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:08.690 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:08.690 "hdgst": false, 00:11:08.690 "ddgst": false 00:11:08.690 }, 00:11:08.690 "method": "bdev_nvme_attach_controller" 00:11:08.690 }' 00:11:08.690 [2024-11-20 08:08:22.550326] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:11:08.690 [2024-11-20 08:08:22.550370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1572259 ] 00:11:08.690 [2024-11-20 08:08:22.625446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:08.690 [2024-11-20 08:08:22.665902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:08.948 Running I/O for 10 seconds... 
00:11:11.258 8660.00 IOPS, 67.66 MiB/s [2024-11-20T07:08:26.221Z] 8740.00 IOPS, 68.28 MiB/s [2024-11-20T07:08:27.156Z] 8761.67 IOPS, 68.45 MiB/s [2024-11-20T07:08:28.092Z] 8770.00 IOPS, 68.52 MiB/s [2024-11-20T07:08:29.026Z] 8778.00 IOPS, 68.58 MiB/s [2024-11-20T07:08:29.958Z] 8791.33 IOPS, 68.68 MiB/s [2024-11-20T07:08:31.333Z] 8791.43 IOPS, 68.68 MiB/s [2024-11-20T07:08:31.899Z] 8775.62 IOPS, 68.56 MiB/s [2024-11-20T07:08:33.275Z] 8779.33 IOPS, 68.59 MiB/s [2024-11-20T07:08:33.275Z] 8781.70 IOPS, 68.61 MiB/s 00:11:19.247 Latency(us) 00:11:19.247 [2024-11-20T07:08:33.275Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:11:19.247 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:11:19.247 Verification LBA range: start 0x0 length 0x1000 00:11:19.247 Nvme1n1 : 10.05 8749.79 68.36 0.00 0.00 14527.47 1895.86 42692.02 00:11:19.247 [2024-11-20T07:08:33.275Z] =================================================================================================================== 00:11:19.247 [2024-11-20T07:08:33.275Z] Total : 8749.79 68.36 0.00 0.00 14527.47 1895.86 42692.02 00:11:19.247 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1573958 00:11:19.247 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:11:19.247 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:19.247 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:11:19.247 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:11:19.247 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:11:19.248 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:11:19.248 08:08:33 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:11:19.248 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:11:19.248 { 00:11:19.248 "params": { 00:11:19.248 "name": "Nvme$subsystem", 00:11:19.248 "trtype": "$TEST_TRANSPORT", 00:11:19.248 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:19.248 "adrfam": "ipv4", 00:11:19.248 "trsvcid": "$NVMF_PORT", 00:11:19.248 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:19.248 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:19.248 "hdgst": ${hdgst:-false}, 00:11:19.248 "ddgst": ${ddgst:-false} 00:11:19.248 }, 00:11:19.248 "method": "bdev_nvme_attach_controller" 00:11:19.248 } 00:11:19.248 EOF 00:11:19.248 )") 00:11:19.248 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:11:19.248 [2024-11-20 08:08:33.105170] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.105207] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:11:19.248 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:11:19.248 08:08:33 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:11:19.248 "params": { 00:11:19.248 "name": "Nvme1", 00:11:19.248 "trtype": "tcp", 00:11:19.248 "traddr": "10.0.0.2", 00:11:19.248 "adrfam": "ipv4", 00:11:19.248 "trsvcid": "4420", 00:11:19.248 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:19.248 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:19.248 "hdgst": false, 00:11:19.248 "ddgst": false 00:11:19.248 }, 00:11:19.248 "method": "bdev_nvme_attach_controller" 00:11:19.248 }' 00:11:19.248 [2024-11-20 08:08:33.117174] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.117187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.129206] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.129216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.141238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.141247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.147684] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:11:19.248 [2024-11-20 08:08:33.147725] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1573958 ] 00:11:19.248 [2024-11-20 08:08:33.153264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.153274] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.165299] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.165309] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.177334] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.177343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.189363] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.189372] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.201395] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.201404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.213429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.213440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.222175] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:19.248 [2024-11-20 08:08:33.225468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:11:19.248 [2024-11-20 08:08:33.225482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.237504] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.237517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.249534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.249544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.261568] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.248 [2024-11-20 08:08:33.261579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.248 [2024-11-20 08:08:33.264006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.506 [2024-11-20 08:08:33.273603] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.273615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.285639] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.285664] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.297667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.297682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.309696] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.309707] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.321729] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.321741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.333757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.333768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.345787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.345796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.357872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.357892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.369900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.369916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.381937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.381952] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.393964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.393975] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.405996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.406007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.418033] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.418044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.430070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.430084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.442102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.442115] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.493408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.493427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 [2024-11-20 08:08:33.502267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.502280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.506 Running I/O for 5 seconds... 
00:11:19.506 [2024-11-20 08:08:33.517780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.506 [2024-11-20 08:08:33.517800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.532488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.532512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.543266] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.543285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.557158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.557177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.570528] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.570548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.584605] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.584624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.598635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.598653] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.612412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.612432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.626122] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.626141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.640156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.640176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.654191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.654216] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.667589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.667607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.681741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.681759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.695636] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.695655] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.709506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.709525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.723425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.765 [2024-11-20 08:08:33.723444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.765 [2024-11-20 08:08:33.737376] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:19.766 [2024-11-20 08:08:33.737395] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.766 [2024-11-20 08:08:33.751484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.766 [2024-11-20 08:08:33.751502] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.766 [2024-11-20 08:08:33.765623] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.766 [2024-11-20 08:08:33.765642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.766 [2024-11-20 08:08:33.777002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.766 [2024-11-20 08:08:33.777020] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:19.766 [2024-11-20 08:08:33.786384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:19.766 [2024-11-20 08:08:33.786402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.024 [2024-11-20 08:08:33.796310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.796330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.810683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.810702] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.824137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.824155] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.838103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 
[2024-11-20 08:08:33.838122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.851929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.851948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.865841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.865859] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.879693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.879712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.893827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.893845] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.907410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.907427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.921569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.921588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.935783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.935802] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.947062] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.947080] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.960930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.960948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.974527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.974545] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:33.988237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:33.988255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:34.002153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:34.002171] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:34.015849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:34.015868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:34.029723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:34.029741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.025 [2024-11-20 08:08:34.043544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.025 [2024-11-20 08:08:34.043563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.284 [2024-11-20 08:08:34.057327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.284 [2024-11-20 08:08:34.057346] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:20.284 [2024-11-20 08:08:34.070823] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.284 [2024-11-20 08:08:34.070841] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.284 [2024-11-20 08:08:34.084659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.284 [2024-11-20 08:08:34.084677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.284 [2024-11-20 08:08:34.098514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.284 [2024-11-20 08:08:34.098532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.284 [2024-11-20 08:08:34.111878] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.284 [2024-11-20 08:08:34.111897] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.284 [2024-11-20 08:08:34.125923] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.284 [2024-11-20 08:08:34.125942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.284 [2024-11-20 08:08:34.139548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.284 [2024-11-20 08:08:34.139566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.284 [2024-11-20 08:08:34.153529] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.284 [2024-11-20 08:08:34.153548] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.284 [2024-11-20 08:08:34.167383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:20.284 [2024-11-20 08:08:34.167401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:20.284 [2024-11-20 08:08:34.180936] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:20.284 [2024-11-20 08:08:34.180955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:20.543 16765.00 IOPS, 130.98 MiB/s [2024-11-20T07:08:34.571Z]
00:11:21.579 16840.50 IOPS, 131.57 MiB/s [2024-11-20T07:08:35.607Z]
add namespace 00:11:22.616 16870.00 IOPS, 131.80 MiB/s [2024-11-20T07:08:36.644Z] [2024-11-20 08:08:36.525108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.616 [2024-11-20 08:08:36.525127] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.616 [2024-11-20 08:08:36.538908] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.616 [2024-11-20 08:08:36.538926] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.616 [2024-11-20 08:08:36.552477] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.616 [2024-11-20 08:08:36.552495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.616 [2024-11-20 08:08:36.566038] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.616 [2024-11-20 08:08:36.566056] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.616 [2024-11-20 08:08:36.579942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.616 [2024-11-20 08:08:36.579959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.616 [2024-11-20 08:08:36.593620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.616 [2024-11-20 08:08:36.593638] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.616 [2024-11-20 08:08:36.607626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.616 [2024-11-20 08:08:36.607643] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.616 [2024-11-20 08:08:36.616836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.616 [2024-11-20 08:08:36.616853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:22.616 [2024-11-20 08:08:36.631198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.616 [2024-11-20 08:08:36.631222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.645303] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.645331] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.656410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.656429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.671022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.671040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.684773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.684791] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.698859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.698877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.712300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.712320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.726569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.726587] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.740174] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.740191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.750217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.750235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.764268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.764286] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.778102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.778120] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.792108] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.792126] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.806037] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.806055] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.819947] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.819964] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.833289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.833308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.847225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.847243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.861460] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.861478] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.872999] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.873022] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:22.875 [2024-11-20 08:08:36.887235] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:22.875 [2024-11-20 08:08:36.887252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.133 [2024-11-20 08:08:36.901378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.133 [2024-11-20 08:08:36.901399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.133 [2024-11-20 08:08:36.915357] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.133 [2024-11-20 08:08:36.915377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.133 [2024-11-20 08:08:36.929510] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.133 [2024-11-20 08:08:36.929529] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.133 [2024-11-20 08:08:36.940606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.133 [2024-11-20 08:08:36.940623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.133 [2024-11-20 08:08:36.954739] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.133 
[2024-11-20 08:08:36.954756] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.133 [2024-11-20 08:08:36.968295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.133 [2024-11-20 08:08:36.968314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.133 [2024-11-20 08:08:36.982224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.133 [2024-11-20 08:08:36.982242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.133 [2024-11-20 08:08:36.996350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.133 [2024-11-20 08:08:36.996369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.133 [2024-11-20 08:08:37.009986] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.134 [2024-11-20 08:08:37.010004] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.134 [2024-11-20 08:08:37.024066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.134 [2024-11-20 08:08:37.024084] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.134 [2024-11-20 08:08:37.038329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.134 [2024-11-20 08:08:37.038348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.134 [2024-11-20 08:08:37.049227] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.134 [2024-11-20 08:08:37.049245] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.134 [2024-11-20 08:08:37.063766] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.134 [2024-11-20 08:08:37.063786] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.134 [2024-11-20 08:08:37.077707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.134 [2024-11-20 08:08:37.077725] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.134 [2024-11-20 08:08:37.091505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.134 [2024-11-20 08:08:37.091523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.134 [2024-11-20 08:08:37.105481] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.134 [2024-11-20 08:08:37.105500] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.134 [2024-11-20 08:08:37.119279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.134 [2024-11-20 08:08:37.119298] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.134 [2024-11-20 08:08:37.133558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.134 [2024-11-20 08:08:37.133576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.134 [2024-11-20 08:08:37.147645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.134 [2024-11-20 08:08:37.147663] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.161585] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.161603] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.175549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.175568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:23.392 [2024-11-20 08:08:37.189666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.189684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.203374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.203393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.217212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.217230] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.226736] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.226754] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.241087] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.241105] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.255024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.255042] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.268912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.268930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.282670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.282688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.296714] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.296732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.310621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.310639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.320588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.320607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.334754] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.334773] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.344060] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.344078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.358024] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.358041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.371556] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.371574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.385470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.385488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.399583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.399602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.392 [2024-11-20 08:08:37.413757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.392 [2024-11-20 08:08:37.413776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.650 [2024-11-20 08:08:37.424989] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.425007] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.439246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.439264] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.452917] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.452935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.466644] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.466662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.480226] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.480244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.493995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.494014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.507486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 
[2024-11-20 08:08:37.507505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 16887.75 IOPS, 131.94 MiB/s [2024-11-20T07:08:37.679Z] [2024-11-20 08:08:37.521176] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.521195] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.535652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.535672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.546829] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.546849] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.560968] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.560987] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.574856] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.574875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.589142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.589161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.599748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.599766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.614078] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 
[2024-11-20 08:08:37.614096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.628217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.628235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.639039] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.639061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.653356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.653375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.651 [2024-11-20 08:08:37.663324] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.651 [2024-11-20 08:08:37.663342] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.677438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.677458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.691385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.691404] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.704891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.704910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.714327] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.714346] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.728378] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.728397] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.742215] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.742234] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.755533] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.755551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.769359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.769378] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.782885] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.782903] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.796570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.796588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.810083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.810102] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.823916] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.823935] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:23.909 [2024-11-20 08:08:37.837659] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.837678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.851932] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.851950] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.867491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.867510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.876970] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.876988] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.891189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.891218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.905744] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.905763] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:23.909 [2024-11-20 08:08:37.921468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:23.909 [2024-11-20 08:08:37.921487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:37.935657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:37.935677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:37.949181] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:37.949200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:37.963350] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:37.963369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:37.977269] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:37.977287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:37.991514] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:37.991532] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.005121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.005139] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.019222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.019241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.032717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.032735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.046399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.046417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.060322] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.060340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.074604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.074623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.088474] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.088492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.101849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.101867] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.116019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.116038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.126782] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.126800] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.140985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.141003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.154335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.154361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.167936] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 
[2024-11-20 08:08:38.167954] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.168 [2024-11-20 08:08:38.181750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.168 [2024-11-20 08:08:38.181768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.195586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.195605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.209635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.209654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.223522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.223540] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.237435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.237464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.250996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.251014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.264693] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.264712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.278523] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.278541] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.292152] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.292170] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.305752] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.305770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.319972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.319990] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.333770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.333788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.347780] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.347798] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.361861] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.426 [2024-11-20 08:08:38.361880] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.426 [2024-11-20 08:08:38.375288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.427 [2024-11-20 08:08:38.375306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.427 [2024-11-20 08:08:38.389198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.427 [2024-11-20 08:08:38.389222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:24.427 [2024-11-20 08:08:38.402816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.427 [2024-11-20 08:08:38.402834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.427 [2024-11-20 08:08:38.417124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.427 [2024-11-20 08:08:38.417143] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.427 [2024-11-20 08:08:38.428259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.427 [2024-11-20 08:08:38.428278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.427 [2024-11-20 08:08:38.442602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.427 [2024-11-20 08:08:38.442620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.685 [2024-11-20 08:08:38.456276] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.685 [2024-11-20 08:08:38.456296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.685 [2024-11-20 08:08:38.469891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.685 [2024-11-20 08:08:38.469910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.685 [2024-11-20 08:08:38.483583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.685 [2024-11-20 08:08:38.483602] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.685 [2024-11-20 08:08:38.497286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.685 [2024-11-20 08:08:38.497305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.685 [2024-11-20 08:08:38.511634] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:24.685 [2024-11-20 08:08:38.511652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:24.685 16886.20 IOPS, 131.92 MiB/s [2024-11-20T07:08:38.713Z] [2024-11-20 08:08:38.522323] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:24.685 [2024-11-20 08:08:38.522341] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:24.685
00:11:24.685 Latency(us)
00:11:24.685 [2024-11-20T07:08:38.713Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:11:24.685 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:11:24.685 Nvme1n1 : 5.01 16890.23 131.95 0.00 0.00 7571.41 3448.44 15915.89
00:11:24.685 [2024-11-20T07:08:38.713Z] ===================================================================================================================
00:11:24.685 [2024-11-20T07:08:38.713Z] Total : 16890.23 131.95 0.00 0.00 7571.41 3448.44 15915.89
00:11:24.685 [2024-11-20 08:08:38.531858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:24.685 [2024-11-20 08:08:38.531875] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:24.685 [2024-11-20 08:08:38.543886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:24.685 [2024-11-20 08:08:38.543900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:24.685 [2024-11-20 08:08:38.555927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:24.685 [2024-11-20 08:08:38.555945] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:11:24.686 [2024-11-20 08:08:38.567951] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:11:24.686 [2024-11-20 08:08:38.567966]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.686 [2024-11-20 08:08:38.579982] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.686 [2024-11-20 08:08:38.579995] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.686 [2024-11-20 08:08:38.592012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.686 [2024-11-20 08:08:38.592024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.686 [2024-11-20 08:08:38.604044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.686 [2024-11-20 08:08:38.604059] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.686 [2024-11-20 08:08:38.616073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.686 [2024-11-20 08:08:38.616087] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.686 [2024-11-20 08:08:38.628103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.686 [2024-11-20 08:08:38.628117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.686 [2024-11-20 08:08:38.640132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.686 [2024-11-20 08:08:38.640142] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.686 [2024-11-20 08:08:38.652169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.686 [2024-11-20 08:08:38.652179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.686 [2024-11-20 08:08:38.664200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.686 [2024-11-20 08:08:38.664214] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:11:24.686 [2024-11-20 08:08:38.676232] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:11:24.686 [2024-11-20 08:08:38.676240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:24.686 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1573958) - No such process 00:11:24.686 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1573958 00:11:24.686 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:24.686 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.686 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.686 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.686 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:11:24.686 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.686 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.686 delay0 00:11:24.686 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.686 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:11:24.686 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.686 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:11:24.944 08:08:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.944 08:08:38 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:11:24.944 [2024-11-20 08:08:38.828124] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:11:31.507 Initializing NVMe Controllers
00:11:31.507 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:11:31.507 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:11:31.507 Initialization complete. Launching workers.
00:11:31.507 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 155
00:11:31.507 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 442, failed to submit 33
00:11:31.507 success 267, unsuccessful 175, failed 0
00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT
00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini
00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup
00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync
00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e
00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20}
00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:11:31.507 rmmod nvme_tcp
00:11:31.507 rmmod nvme_fabrics
00:11:31.507 rmmod nvme_keyring
00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r
nvme-fabrics 00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 1572102 ']' 00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 1572102 00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1572102 ']' 00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1572102 00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.507 08:08:44 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1572102 00:11:31.507 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:11:31.507 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:11:31.507 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1572102' 00:11:31.507 killing process with pid 1572102 00:11:31.507 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1572102 00:11:31.507 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1572102 00:11:31.507 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:31.507 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:11:31.507 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:11:31.507 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@257 -- # remove_target_ns 00:11:31.507 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:31.507 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:31.507 08:08:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@121 -- # return 0 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:33.409 08:08:47 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns=
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]]
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1'
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=()
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore
00:11:33.409
00:11:33.409 real 0m31.475s
00:11:33.409 user 0m41.953s
00:11:33.409 sys 0m11.074s
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:11:33.409 ************************************
00:11:33.409 END TEST nvmf_zcopy
00:11:33.409 ************************************
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core
-- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:33.409 ************************************ 00:11:33.409 START TEST nvmf_nmic 00:11:33.409 ************************************ 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:11:33.409 * Looking for test storage... 00:11:33.409 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:11:33.409 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:11:33.669 08:08:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:33.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.669 --rc genhtml_branch_coverage=1 00:11:33.669 --rc genhtml_function_coverage=1 00:11:33.669 --rc genhtml_legend=1 00:11:33.669 --rc geninfo_all_blocks=1 00:11:33.669 --rc geninfo_unexecuted_blocks=1 00:11:33.669 00:11:33.669 ' 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:33.669 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.669 --rc genhtml_branch_coverage=1 00:11:33.669 --rc genhtml_function_coverage=1 00:11:33.669 --rc genhtml_legend=1 00:11:33.669 --rc geninfo_all_blocks=1 00:11:33.669 --rc geninfo_unexecuted_blocks=1 00:11:33.669 00:11:33.669 ' 00:11:33.669 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:33.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.670 --rc genhtml_branch_coverage=1 00:11:33.670 --rc genhtml_function_coverage=1 00:11:33.670 --rc genhtml_legend=1 00:11:33.670 --rc geninfo_all_blocks=1 00:11:33.670 --rc geninfo_unexecuted_blocks=1 00:11:33.670 00:11:33.670 ' 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:33.670 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:33.670 --rc genhtml_branch_coverage=1 00:11:33.670 --rc genhtml_function_coverage=1 00:11:33.670 --rc genhtml_legend=1 00:11:33.670 --rc geninfo_all_blocks=1 00:11:33.670 --rc geninfo_unexecuted_blocks=1 00:11:33.670 00:11:33.670 ' 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:33.670 08:08:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:11:33.670 08:08:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:11:33.670 08:08:47 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:33.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:11:33.670 08:08:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 
-- # local -ga x722 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:40.237 08:08:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:40.237 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:40.237 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:40.237 08:08:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:40.237 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:40.238 Found net devices under 0000:86:00.0: cvl_0_0 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:40.238 08:08:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:40.238 Found net devices under 0000:86:00.1: cvl_0_1 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@247 -- # create_target_ns 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@139 -- # 
set_up lo NVMF_TARGET_NS_CMD 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@48 -- # 
ips=("$ip" $((++ip))) 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # 
eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:40.238 10.0.0.1 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:40.238 10.0.0.2 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@547 
-- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:40.238 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:40.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:40.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:11:40.239 00:11:40.239 --- 10.0.0.1 ping statistics --- 00:11:40.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.239 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:40.239 08:08:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:40.239 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:40.239 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:11:40.239 00:11:40.239 --- 10.0.0.2 ping statistics --- 00:11:40.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:40.239 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 
00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 
-- # get_net_dev initiator1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:40.239 08:08:53 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 
-- # [[ -n '' ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:11:40.239 ' 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:40.239 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=1579593 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 1579593 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # ip 
netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1579593 ']' 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:40.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:40.240 08:08:53 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.240 [2024-11-20 08:08:53.710305] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:11:40.240 [2024-11-20 08:08:53.710355] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:40.240 [2024-11-20 08:08:53.789361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:40.240 [2024-11-20 08:08:53.832338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:40.240 [2024-11-20 08:08:53.832375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:40.240 [2024-11-20 08:08:53.832382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:40.240 [2024-11-20 08:08:53.832388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:40.240 [2024-11-20 08:08:53.832393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:40.240 [2024-11-20 08:08:53.833912] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:40.240 [2024-11-20 08:08:53.834021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:40.240 [2024-11-20 08:08:53.834131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:40.240 [2024-11-20 08:08:53.834131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:40.807 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:40.807 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:11:40.807 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:40.807 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:40.807 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.807 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:40.807 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:40.807 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.807 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.807 [2024-11-20 08:08:54.608958] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:40.807 
08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.807 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.808 Malloc0 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.808 [2024-11-20 08:08:54.670844] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:11:40.808 test case1: single bdev can't be used in multiple subsystems 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.808 [2024-11-20 08:08:54.698751] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:11:40.808 [2024-11-20 
08:08:54.698769] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:11:40.808 [2024-11-20 08:08:54.698776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:11:40.808 request: 00:11:40.808 { 00:11:40.808 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:40.808 "namespace": { 00:11:40.808 "bdev_name": "Malloc0", 00:11:40.808 "no_auto_visible": false 00:11:40.808 }, 00:11:40.808 "method": "nvmf_subsystem_add_ns", 00:11:40.808 "req_id": 1 00:11:40.808 } 00:11:40.808 Got JSON-RPC error response 00:11:40.808 response: 00:11:40.808 { 00:11:40.808 "code": -32602, 00:11:40.808 "message": "Invalid parameters" 00:11:40.808 } 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:11:40.808 Adding namespace failed - expected result. 
00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:11:40.808 test case2: host connect to nvmf target in multiple paths 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:40.808 [2024-11-20 08:08:54.710877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:40.808 08:08:54 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.186 08:08:55 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:11:43.123 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:11:43.123 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:11:43.123 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:43.123 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:43.123 08:08:57 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:11:45.657 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:45.657 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:45.657 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:45.657 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:45.657 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:45.657 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:11:45.657 08:08:59 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:11:45.657 [global] 00:11:45.657 thread=1 00:11:45.657 invalidate=1 00:11:45.657 rw=write 00:11:45.657 time_based=1 00:11:45.657 runtime=1 00:11:45.657 ioengine=libaio 00:11:45.657 direct=1 00:11:45.657 bs=4096 00:11:45.657 iodepth=1 00:11:45.657 norandommap=0 00:11:45.657 numjobs=1 00:11:45.657 00:11:45.657 verify_dump=1 00:11:45.657 verify_backlog=512 00:11:45.657 verify_state_save=0 00:11:45.657 do_verify=1 00:11:45.657 verify=crc32c-intel 00:11:45.657 [job0] 00:11:45.657 filename=/dev/nvme0n1 00:11:45.657 Could not set queue depth (nvme0n1) 00:11:45.657 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:11:45.657 fio-3.35 00:11:45.657 Starting 1 thread 00:11:46.594 00:11:46.594 job0: (groupid=0, jobs=1): err= 0: pid=1580674: Wed Nov 20 08:09:00 2024 00:11:46.594 read: IOPS=684, BW=2740KiB/s (2806kB/s)(2792KiB/1019msec) 00:11:46.594 slat (nsec): min=6439, max=27717, avg=7588.41, stdev=2433.91 00:11:46.594 clat (usec): min=178, max=41868, avg=1221.96, stdev=6292.03 00:11:46.594 lat (usec): min=185, max=41890, 
avg=1229.55, stdev=6294.10 00:11:46.594 clat percentiles (usec): 00:11:46.594 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 198], 00:11:46.594 | 30.00th=[ 202], 40.00th=[ 212], 50.00th=[ 239], 60.00th=[ 243], 00:11:46.594 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 265], 95.00th=[ 289], 00:11:46.594 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:11:46.594 | 99.99th=[41681] 00:11:46.594 write: IOPS=1004, BW=4020KiB/s (4116kB/s)(4096KiB/1019msec); 0 zone resets 00:11:46.594 slat (nsec): min=9112, max=38892, avg=10147.03, stdev=1529.37 00:11:46.594 clat (usec): min=123, max=429, avg=142.48, stdev=12.14 00:11:46.594 lat (usec): min=133, max=468, avg=152.63, stdev=12.88 00:11:46.594 clat percentiles (usec): 00:11:46.594 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 139], 00:11:46.594 | 30.00th=[ 139], 40.00th=[ 141], 50.00th=[ 141], 60.00th=[ 143], 00:11:46.594 | 70.00th=[ 145], 80.00th=[ 147], 90.00th=[ 151], 95.00th=[ 155], 00:11:46.594 | 99.00th=[ 174], 99.50th=[ 186], 99.90th=[ 237], 99.95th=[ 429], 00:11:46.594 | 99.99th=[ 429] 00:11:46.594 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, avg=8192.00, stdev= 0.00, samples=1 00:11:46.594 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:11:46.594 lat (usec) : 250=89.84%, 500=9.18% 00:11:46.594 lat (msec) : 50=0.99% 00:11:46.594 cpu : usr=0.69%, sys=1.67%, ctx=1722, majf=0, minf=1 00:11:46.594 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:11:46.594 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.594 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:11:46.594 issued rwts: total=698,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:11:46.594 latency : target=0, window=0, percentile=100.00%, depth=1 00:11:46.594 00:11:46.594 Run status group 0 (all jobs): 00:11:46.594 READ: bw=2740KiB/s (2806kB/s), 2740KiB/s-2740KiB/s (2806kB/s-2806kB/s), io=2792KiB (2859kB), 
run=1019-1019msec 00:11:46.594 WRITE: bw=4020KiB/s (4116kB/s), 4020KiB/s-4020KiB/s (4116kB/s-4116kB/s), io=4096KiB (4194kB), run=1019-1019msec 00:11:46.594 00:11:46.594 Disk stats (read/write): 00:11:46.594 nvme0n1: ios=745/1024, merge=0/0, ticks=746/136, in_queue=882, util=91.38% 00:11:46.594 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:46.853 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # 
for i in {1..20} 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:46.853 rmmod nvme_tcp 00:11:46.853 rmmod nvme_fabrics 00:11:46.853 rmmod nvme_keyring 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 1579593 ']' 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 1579593 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1579593 ']' 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1579593 00:11:46.853 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:11:47.112 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.112 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1579593 00:11:47.112 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.112 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.112 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1579593' 00:11:47.112 killing process with pid 1579593 00:11:47.112 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1579593 00:11:47.112 08:09:00 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1579593 00:11:47.112 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:47.112 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:11:47.112 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:11:47.112 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:11:47.112 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:47.112 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:47.112 08:09:01 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in 
"${dev_map[@]}" 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:11:49.649 00:11:49.649 real 0m15.867s 00:11:49.649 user 0m36.297s 00:11:49.649 sys 0m5.467s 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:11:49.649 ************************************ 00:11:49.649 END TEST nvmf_nmic 00:11:49.649 ************************************ 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test 
nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:49.649 ************************************ 00:11:49.649 START TEST nvmf_fio_target 00:11:49.649 ************************************ 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:11:49.649 * Looking for test storage... 00:11:49.649 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # 
IFS=.-: 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:11:49.649 08:09:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:49.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.649 --rc genhtml_branch_coverage=1 00:11:49.649 --rc genhtml_function_coverage=1 00:11:49.649 --rc genhtml_legend=1 00:11:49.649 --rc geninfo_all_blocks=1 00:11:49.649 --rc geninfo_unexecuted_blocks=1 00:11:49.649 00:11:49.649 ' 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:49.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.649 --rc genhtml_branch_coverage=1 00:11:49.649 --rc genhtml_function_coverage=1 00:11:49.649 --rc genhtml_legend=1 00:11:49.649 --rc geninfo_all_blocks=1 00:11:49.649 --rc geninfo_unexecuted_blocks=1 00:11:49.649 00:11:49.649 ' 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:49.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.649 --rc genhtml_branch_coverage=1 00:11:49.649 --rc genhtml_function_coverage=1 00:11:49.649 --rc genhtml_legend=1 00:11:49.649 --rc geninfo_all_blocks=1 00:11:49.649 --rc geninfo_unexecuted_blocks=1 00:11:49.649 00:11:49.649 ' 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:11:49.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:49.649 --rc genhtml_branch_coverage=1 00:11:49.649 --rc genhtml_function_coverage=1 00:11:49.649 --rc genhtml_legend=1 00:11:49.649 --rc geninfo_all_blocks=1 00:11:49.649 --rc geninfo_unexecuted_blocks=1 00:11:49.649 00:11:49.649 ' 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:49.649 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:49.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 
00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:11:49.650 08:09:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:56.223 08:09:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # 
pci_devs=("${e810[@]}") 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:56.223 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:56.223 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:56.223 
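The trace above fills the `e810`, `x722`, and `mlx` arrays from `pci_bus_cache` keyed by vendor:device ID, then reports `Found 0000:86:00.0 (0x8086 - 0x159b)` for each E810-family port. A minimal sketch of that vendor/device bucketing, assuming the ID-to-family mapping visible in the trace (the `classify` helper itself is hypothetical, not part of nvmf/common.sh):

```python
# Hypothetical sketch of the NIC-family bucketing done by nvmf/common.sh
# lines 136-160 (IDs copied from the pci_bus_cache keys in the trace).
INTEL, MELLANOX = 0x8086, 0x15B3

FAMILIES = {
    (INTEL, 0x1592): "e810",
    (INTEL, 0x159B): "e810",
    (INTEL, 0x37D2): "x722",
    (MELLANOX, 0x1017): "mlx",
    (MELLANOX, 0x101B): "mlx",
    (MELLANOX, 0x101D): "mlx",
}

def classify(vendor: int, device: int) -> str:
    # IDs with no bucket fall through, like the script's unmatched case.
    return FAMILIES.get((vendor, device), "unknown")

print(classify(0x8086, 0x159B))  # the two ports found in this run -> e810
```

In this run only the two `0x8086:0x159b` (ice-driver) ports match, so `pci_devs` ends up holding just the e810 list.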
08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:56.223 Found net devices under 0000:86:00.0: cvl_0_0 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:56.223 08:09:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:56.223 Found net devices under 0000:86:00.1: cvl_0_1 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@247 -- # create_target_ns 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@139 -- 
# set_up lo NVMF_TARGET_NS_CMD 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 
target=target0 _ns= 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:11:56.223 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:56.224 10.0.0.1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@198 
-- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:56.224 10.0.0.2 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:11:56.224 08:09:09 
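The `val_to_ip` helper traced above turns the integer pool value 167772161 (0x0A000001) into a dotted quad via `printf '%u.%u.%u.%u\n'` before assigning it to `cvl_0_0`. The same big-endian octet split in Python, as a sketch of what the shell function computes:

```python
def val_to_ip(val: int) -> str:
    # Split a 32-bit value into four big-endian octets, mirroring
    # nvmf/setup.sh's val_to_ip (printf '%u.%u.%u.%u').
    return ".".join(str((val >> shift) & 0xFF) for shift in (24, 16, 8, 0))

print(val_to_ip(167772161))  # 10.0.0.1, assigned to cvl_0_0
print(val_to_ip(167772162))  # 10.0.0.2, assigned to cvl_0_1 in the netns
```

This is why the trace shows `ip=10.0.0.1` on the host side and `10.0.0.2` inside the `nvmf_ns_spdk` namespace.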
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
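After recording `dev_map["initiator0"]=cvl_0_0` and `dev_map["target0"]=cvl_0_1`, the `setup_interfaces` loop above advances with `(( _dev++, ip_pool += 2 ))`: each initiator/target pair consumes two consecutive addresses from the pool starting at `0x0a000001`. A sketch of that allocation scheme (the `allocate_pairs` function is illustrative, not from the script):

```python
def allocate_pairs(pairs: int, pool: int = 0x0A000001):
    # Each pair takes two consecutive addresses, initiator then target,
    # mirroring setup_interfaces' "ip_pool += 2" per-pair step.
    def dotted(v: int) -> str:
        return ".".join(str((v >> s) & 0xFF) for s in (24, 16, 8, 0))
    out = []
    for _ in range(pairs):
        out.append((dotted(pool), dotted(pool + 1)))
        pool += 2
    return out

print(allocate_pairs(2))  # [('10.0.0.1', '10.0.0.2'), ('10.0.0.3', '10.0.0.4')]
```

With `total_initiator_target_pairs=1` in this run, only the first pair (10.0.0.1, 10.0.0.2) is used; the bounds check `(_dev + no) * 2 <= 255` caps how many pairs the /24 pool can hold.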
nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:56.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:56.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:11:56.224 00:11:56.224 --- 10.0.0.1 ping statistics --- 00:11:56.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.224 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:56.224 08:09:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:11:56.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:56.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:11:56.224 00:11:56.224 --- 10.0.0.2 ping statistics --- 00:11:56.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:56.224 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:11:56.224 08:09:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@174 -- # 
get_ip_address initiator1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:11:56.224 ' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart 
-m 0xF 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=1584981 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 1584981 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1584981 ']' 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:56.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.224 [2024-11-20 08:09:09.678948] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:11:56.224 [2024-11-20 08:09:09.678989] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:56.224 [2024-11-20 08:09:09.758048] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:56.224 [2024-11-20 08:09:09.800562] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:56.224 [2024-11-20 08:09:09.800599] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:56.224 [2024-11-20 08:09:09.800606] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:56.224 [2024-11-20 08:09:09.800612] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:56.224 [2024-11-20 08:09:09.800617] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:56.224 [2024-11-20 08:09:09.802090] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:56.224 [2024-11-20 08:09:09.802253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:56.224 [2024-11-20 08:09:09.802307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.224 [2024-11-20 08:09:09.802307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:56.224 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:56.225 08:09:09 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:56.225 [2024-11-20 08:09:10.127896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:56.225 08:09:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.483 08:09:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:11:56.483 08:09:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:56.741 08:09:10 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:11:56.741 08:09:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.000 08:09:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:11:57.001 08:09:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.001 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:11:57.001 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:11:57.259 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.517 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:11:57.517 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:57.776 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:11:57.776 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:11:58.035 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:11:58.035 08:09:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:11:58.035 08:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:58.294 08:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:58.294 08:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:58.552 08:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:11:58.552 08:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:58.810 08:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:58.810 [2024-11-20 08:09:12.812857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:59.068 08:09:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:11:59.068 08:09:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:11:59.326 08:09:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:12:00.705 08:09:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:12:00.705 08:09:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:12:00.705 08:09:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:00.705 08:09:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:12:00.705 08:09:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:12:00.705 08:09:14 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:12:02.766 08:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:02.766 08:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:02.766 08:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:02.766 08:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:12:02.766 08:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:02.766 08:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:12:02.766 08:09:16 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:12:02.766 [global] 00:12:02.766 thread=1 00:12:02.766 invalidate=1 00:12:02.766 rw=write 00:12:02.766 time_based=1 00:12:02.766 runtime=1 00:12:02.766 ioengine=libaio 00:12:02.766 direct=1 00:12:02.766 bs=4096 00:12:02.766 iodepth=1 00:12:02.766 norandommap=0 00:12:02.766 numjobs=1 00:12:02.766 00:12:02.766 
verify_dump=1 00:12:02.766 verify_backlog=512 00:12:02.766 verify_state_save=0 00:12:02.766 do_verify=1 00:12:02.766 verify=crc32c-intel 00:12:02.766 [job0] 00:12:02.766 filename=/dev/nvme0n1 00:12:02.766 [job1] 00:12:02.766 filename=/dev/nvme0n2 00:12:02.766 [job2] 00:12:02.766 filename=/dev/nvme0n3 00:12:02.766 [job3] 00:12:02.766 filename=/dev/nvme0n4 00:12:02.766 Could not set queue depth (nvme0n1) 00:12:02.766 Could not set queue depth (nvme0n2) 00:12:02.766 Could not set queue depth (nvme0n3) 00:12:02.766 Could not set queue depth (nvme0n4) 00:12:03.023 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:03.023 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:03.023 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:03.023 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:03.023 fio-3.35 00:12:03.023 Starting 4 threads 00:12:04.391 00:12:04.391 job0: (groupid=0, jobs=1): err= 0: pid=1586350: Wed Nov 20 08:09:18 2024 00:12:04.391 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:12:04.391 slat (nsec): min=10593, max=24534, avg=23293.45, stdev=2846.82 00:12:04.391 clat (usec): min=40553, max=41969, avg=41261.11, stdev=493.61 00:12:04.391 lat (usec): min=40564, max=41993, avg=41284.40, stdev=494.60 00:12:04.391 clat percentiles (usec): 00:12:04.391 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:04.391 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:04.391 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:12:04.391 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:04.391 | 99.99th=[42206] 00:12:04.391 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:12:04.391 slat (nsec): min=10356, 
max=86341, avg=11998.18, stdev=3916.15 00:12:04.391 clat (usec): min=136, max=266, avg=165.15, stdev=13.61 00:12:04.391 lat (usec): min=147, max=352, avg=177.14, stdev=15.22 00:12:04.391 clat percentiles (usec): 00:12:04.391 | 1.00th=[ 141], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:12:04.391 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 163], 60.00th=[ 167], 00:12:04.391 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:12:04.391 | 99.00th=[ 200], 99.50th=[ 208], 99.90th=[ 265], 99.95th=[ 265], 00:12:04.391 | 99.99th=[ 265] 00:12:04.391 bw ( KiB/s): min= 4096, max= 4096, per=12.86%, avg=4096.00, stdev= 0.00, samples=1 00:12:04.391 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:12:04.391 lat (usec) : 250=95.51%, 500=0.37% 00:12:04.391 lat (msec) : 50=4.12% 00:12:04.391 cpu : usr=0.40%, sys=0.90%, ctx=534, majf=0, minf=1 00:12:04.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.391 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.391 job1: (groupid=0, jobs=1): err= 0: pid=1586368: Wed Nov 20 08:09:18 2024 00:12:04.391 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:12:04.391 slat (nsec): min=7097, max=24541, avg=8307.95, stdev=1129.82 00:12:04.391 clat (usec): min=194, max=425, avg=241.18, stdev=17.89 00:12:04.391 lat (usec): min=202, max=433, avg=249.49, stdev=17.87 00:12:04.391 clat percentiles (usec): 00:12:04.391 | 1.00th=[ 204], 5.00th=[ 215], 10.00th=[ 221], 20.00th=[ 229], 00:12:04.391 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 245], 00:12:04.391 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 260], 95.00th=[ 265], 00:12:04.391 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 404], 99.95th=[ 412], 
00:12:04.391 | 99.99th=[ 424] 00:12:04.391 write: IOPS=2338, BW=9355KiB/s (9579kB/s)(9364KiB/1001msec); 0 zone resets 00:12:04.391 slat (usec): min=10, max=40733, avg=38.23, stdev=948.35 00:12:04.391 clat (usec): min=118, max=1231, avg=165.11, stdev=29.09 00:12:04.391 lat (usec): min=129, max=40943, avg=203.35, stdev=950.47 00:12:04.391 clat percentiles (usec): 00:12:04.391 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:12:04.391 | 30.00th=[ 155], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:12:04.391 | 70.00th=[ 169], 80.00th=[ 178], 90.00th=[ 190], 95.00th=[ 200], 00:12:04.391 | 99.00th=[ 227], 99.50th=[ 265], 99.90th=[ 277], 99.95th=[ 289], 00:12:04.391 | 99.99th=[ 1237] 00:12:04.391 bw ( KiB/s): min= 8312, max= 8312, per=26.09%, avg=8312.00, stdev= 0.00, samples=1 00:12:04.391 iops : min= 2078, max= 2078, avg=2078.00, stdev= 0.00, samples=1 00:12:04.391 lat (usec) : 250=86.35%, 500=13.62% 00:12:04.391 lat (msec) : 2=0.02% 00:12:04.391 cpu : usr=3.40%, sys=7.40%, ctx=4393, majf=0, minf=1 00:12:04.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.391 issued rwts: total=2048,2341,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.391 job2: (groupid=0, jobs=1): err= 0: pid=1586401: Wed Nov 20 08:09:18 2024 00:12:04.391 read: IOPS=2218, BW=8875KiB/s (9088kB/s)(8884KiB/1001msec) 00:12:04.391 slat (nsec): min=7015, max=39775, avg=8318.01, stdev=1345.67 00:12:04.391 clat (usec): min=189, max=400, avg=234.46, stdev=16.37 00:12:04.391 lat (usec): min=196, max=409, avg=242.78, stdev=16.36 00:12:04.391 clat percentiles (usec): 00:12:04.391 | 1.00th=[ 200], 5.00th=[ 208], 10.00th=[ 212], 20.00th=[ 221], 00:12:04.391 | 30.00th=[ 227], 40.00th=[ 231], 50.00th=[ 235], 60.00th=[ 239], 
00:12:04.391 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 260], 00:12:04.391 | 99.00th=[ 273], 99.50th=[ 273], 99.90th=[ 281], 99.95th=[ 289], 00:12:04.391 | 99.99th=[ 400] 00:12:04.391 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:04.391 slat (nsec): min=10202, max=61727, avg=11489.06, stdev=1614.04 00:12:04.391 clat (usec): min=125, max=297, avg=162.91, stdev=13.32 00:12:04.391 lat (usec): min=136, max=358, avg=174.40, stdev=13.72 00:12:04.391 clat percentiles (usec): 00:12:04.391 | 1.00th=[ 139], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:12:04.391 | 30.00th=[ 155], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165], 00:12:04.391 | 70.00th=[ 169], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186], 00:12:04.391 | 99.00th=[ 200], 99.50th=[ 206], 99.90th=[ 273], 99.95th=[ 293], 00:12:04.391 | 99.99th=[ 297] 00:12:04.391 bw ( KiB/s): min=11248, max=11248, per=35.30%, avg=11248.00, stdev= 0.00, samples=1 00:12:04.391 iops : min= 2812, max= 2812, avg=2812.00, stdev= 0.00, samples=1 00:12:04.391 lat (usec) : 250=91.88%, 500=8.12% 00:12:04.391 cpu : usr=3.90%, sys=7.70%, ctx=4781, majf=0, minf=1 00:12:04.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.391 issued rwts: total=2221,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.391 job3: (groupid=0, jobs=1): err= 0: pid=1586412: Wed Nov 20 08:09:18 2024 00:12:04.391 read: IOPS=2104, BW=8420KiB/s (8622kB/s)(8428KiB/1001msec) 00:12:04.391 slat (nsec): min=7136, max=28464, avg=8179.83, stdev=1205.37 00:12:04.391 clat (usec): min=179, max=488, avg=246.77, stdev=40.25 00:12:04.391 lat (usec): min=187, max=496, avg=254.95, stdev=40.26 00:12:04.391 clat percentiles (usec): 00:12:04.391 | 1.00th=[ 194], 
5.00th=[ 204], 10.00th=[ 210], 20.00th=[ 219], 00:12:04.391 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 243], 60.00th=[ 249], 00:12:04.391 | 70.00th=[ 258], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 289], 00:12:04.391 | 99.00th=[ 437], 99.50th=[ 445], 99.90th=[ 457], 99.95th=[ 474], 00:12:04.391 | 99.99th=[ 490] 00:12:04.391 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:12:04.391 slat (nsec): min=10237, max=41026, avg=11472.56, stdev=1797.53 00:12:04.391 clat (usec): min=129, max=421, avg=164.15, stdev=15.11 00:12:04.391 lat (usec): min=140, max=433, avg=175.62, stdev=15.45 00:12:04.391 clat percentiles (usec): 00:12:04.391 | 1.00th=[ 141], 5.00th=[ 145], 10.00th=[ 149], 20.00th=[ 153], 00:12:04.391 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 163], 60.00th=[ 165], 00:12:04.391 | 70.00th=[ 169], 80.00th=[ 176], 90.00th=[ 182], 95.00th=[ 188], 00:12:04.391 | 99.00th=[ 202], 99.50th=[ 212], 99.90th=[ 306], 99.95th=[ 392], 00:12:04.391 | 99.99th=[ 420] 00:12:04.391 bw ( KiB/s): min= 9792, max= 9792, per=30.73%, avg=9792.00, stdev= 0.00, samples=1 00:12:04.391 iops : min= 2448, max= 2448, avg=2448.00, stdev= 0.00, samples=1 00:12:04.391 lat (usec) : 250=82.71%, 500=17.29% 00:12:04.391 cpu : usr=3.80%, sys=7.40%, ctx=4667, majf=0, minf=1 00:12:04.391 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:04.391 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.391 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:04.391 issued rwts: total=2107,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:04.391 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:04.391 00:12:04.391 Run status group 0 (all jobs): 00:12:04.391 READ: bw=25.0MiB/s (26.2MB/s), 87.9KiB/s-8875KiB/s (90.0kB/s-9088kB/s), io=25.0MiB (26.2MB), run=1001-1001msec 00:12:04.391 WRITE: bw=31.1MiB/s (32.6MB/s), 2046KiB/s-9.99MiB/s (2095kB/s-10.5MB/s), io=31.1MiB (32.7MB), run=1001-1001msec 
00:12:04.391 00:12:04.391 Disk stats (read/write): 00:12:04.391 nvme0n1: ios=66/512, merge=0/0, ticks=747/82, in_queue=829, util=86.16% 00:12:04.391 nvme0n2: ios=1559/2028, merge=0/0, ticks=1257/299, in_queue=1556, util=91.03% 00:12:04.391 nvme0n3: ios=1847/2048, merge=0/0, ticks=481/307, in_queue=788, util=89.57% 00:12:04.391 nvme0n4: ios=1713/2048, merge=0/0, ticks=472/311, in_queue=783, util=94.25% 00:12:04.391 08:09:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:12:04.391 [global] 00:12:04.391 thread=1 00:12:04.391 invalidate=1 00:12:04.391 rw=randwrite 00:12:04.391 time_based=1 00:12:04.391 runtime=1 00:12:04.391 ioengine=libaio 00:12:04.391 direct=1 00:12:04.391 bs=4096 00:12:04.391 iodepth=1 00:12:04.391 norandommap=0 00:12:04.391 numjobs=1 00:12:04.391 00:12:04.391 verify_dump=1 00:12:04.391 verify_backlog=512 00:12:04.391 verify_state_save=0 00:12:04.391 do_verify=1 00:12:04.391 verify=crc32c-intel 00:12:04.391 [job0] 00:12:04.391 filename=/dev/nvme0n1 00:12:04.391 [job1] 00:12:04.391 filename=/dev/nvme0n2 00:12:04.391 [job2] 00:12:04.391 filename=/dev/nvme0n3 00:12:04.391 [job3] 00:12:04.391 filename=/dev/nvme0n4 00:12:04.391 Could not set queue depth (nvme0n1) 00:12:04.391 Could not set queue depth (nvme0n2) 00:12:04.391 Could not set queue depth (nvme0n3) 00:12:04.391 Could not set queue depth (nvme0n4) 00:12:04.647 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.647 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.647 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.647 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:04.647 fio-3.35 00:12:04.647 
Starting 4 threads 00:12:06.013 00:12:06.013 job0: (groupid=0, jobs=1): err= 0: pid=1586835: Wed Nov 20 08:09:19 2024 00:12:06.013 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:12:06.013 slat (nsec): min=10021, max=23056, avg=22049.05, stdev=2694.78 00:12:06.013 clat (usec): min=40662, max=41981, avg=41045.86, stdev=309.46 00:12:06.013 lat (usec): min=40672, max=42003, avg=41067.91, stdev=310.15 00:12:06.013 clat percentiles (usec): 00:12:06.013 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:12:06.013 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:06.013 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:12:06.013 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:06.013 | 99.99th=[42206] 00:12:06.013 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:12:06.013 slat (nsec): min=9026, max=38013, avg=10428.16, stdev=2185.70 00:12:06.013 clat (usec): min=133, max=411, avg=185.53, stdev=22.60 00:12:06.013 lat (usec): min=143, max=442, avg=195.95, stdev=23.43 00:12:06.013 clat percentiles (usec): 00:12:06.013 | 1.00th=[ 145], 5.00th=[ 157], 10.00th=[ 165], 20.00th=[ 172], 00:12:06.013 | 30.00th=[ 176], 40.00th=[ 180], 50.00th=[ 184], 60.00th=[ 190], 00:12:06.013 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 210], 00:12:06.014 | 99.00th=[ 241], 99.50th=[ 334], 99.90th=[ 412], 99.95th=[ 412], 00:12:06.014 | 99.99th=[ 412] 00:12:06.014 bw ( KiB/s): min= 4087, max= 4087, per=51.74%, avg=4087.00, stdev= 0.00, samples=1 00:12:06.014 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:12:06.014 lat (usec) : 250=94.94%, 500=0.94% 00:12:06.014 lat (msec) : 50=4.12% 00:12:06.014 cpu : usr=0.50%, sys=0.20%, ctx=534, majf=0, minf=1 00:12:06.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.014 
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.014 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.014 job1: (groupid=0, jobs=1): err= 0: pid=1586851: Wed Nov 20 08:09:19 2024 00:12:06.014 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:12:06.014 slat (nsec): min=9729, max=24596, avg=22575.14, stdev=2976.84 00:12:06.014 clat (usec): min=40797, max=41951, avg=41020.81, stdev=220.21 00:12:06.014 lat (usec): min=40821, max=41974, avg=41043.39, stdev=220.19 00:12:06.014 clat percentiles (usec): 00:12:06.014 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:12:06.014 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:06.014 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:06.014 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:06.014 | 99.99th=[42206] 00:12:06.014 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:12:06.014 slat (nsec): min=10425, max=35857, avg=12167.26, stdev=2444.61 00:12:06.014 clat (usec): min=140, max=350, avg=177.70, stdev=19.56 00:12:06.014 lat (usec): min=152, max=370, avg=189.87, stdev=19.86 00:12:06.014 clat percentiles (usec): 00:12:06.014 | 1.00th=[ 147], 5.00th=[ 153], 10.00th=[ 157], 20.00th=[ 163], 00:12:06.014 | 30.00th=[ 167], 40.00th=[ 174], 50.00th=[ 176], 60.00th=[ 180], 00:12:06.014 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:12:06.014 | 99.00th=[ 229], 99.50th=[ 255], 99.90th=[ 351], 99.95th=[ 351], 00:12:06.014 | 99.99th=[ 351] 00:12:06.014 bw ( KiB/s): min= 4087, max= 4087, per=51.74%, avg=4087.00, stdev= 0.00, samples=1 00:12:06.014 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:12:06.014 lat (usec) : 250=95.32%, 500=0.56% 00:12:06.014 lat (msec) : 50=4.12% 00:12:06.014 cpu : usr=0.40%, sys=1.00%, ctx=535, majf=0, minf=1 
00:12:06.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.014 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.014 job2: (groupid=0, jobs=1): err= 0: pid=1586872: Wed Nov 20 08:09:19 2024 00:12:06.014 read: IOPS=23, BW=92.6KiB/s (94.8kB/s)(96.0KiB/1037msec) 00:12:06.014 slat (nsec): min=9260, max=25386, avg=20936.79, stdev=3844.15 00:12:06.014 clat (usec): min=383, max=41403, avg=39276.07, stdev=8285.20 00:12:06.014 lat (usec): min=409, max=41413, avg=39297.01, stdev=8284.25 00:12:06.014 clat percentiles (usec): 00:12:06.014 | 1.00th=[ 383], 5.00th=[40633], 10.00th=[40633], 20.00th=[40633], 00:12:06.014 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:06.014 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:06.014 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:12:06.014 | 99.99th=[41157] 00:12:06.014 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:12:06.014 slat (nsec): min=9008, max=36170, avg=10270.14, stdev=2131.31 00:12:06.014 clat (usec): min=138, max=341, avg=170.62, stdev=22.83 00:12:06.014 lat (usec): min=148, max=377, avg=180.89, stdev=23.59 00:12:06.014 clat percentiles (usec): 00:12:06.014 | 1.00th=[ 143], 5.00th=[ 147], 10.00th=[ 151], 20.00th=[ 155], 00:12:06.014 | 30.00th=[ 159], 40.00th=[ 161], 50.00th=[ 165], 60.00th=[ 169], 00:12:06.014 | 70.00th=[ 176], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 208], 00:12:06.014 | 99.00th=[ 243], 99.50th=[ 310], 99.90th=[ 343], 99.95th=[ 343], 00:12:06.014 | 99.99th=[ 343] 00:12:06.014 bw ( KiB/s): min= 4087, max= 4087, per=51.74%, avg=4087.00, stdev= 0.00, samples=1 00:12:06.014 iops : min= 1021, max= 1021, 
avg=1021.00, stdev= 0.00, samples=1 00:12:06.014 lat (usec) : 250=94.59%, 500=1.12% 00:12:06.014 lat (msec) : 50=4.29% 00:12:06.014 cpu : usr=0.29%, sys=0.39%, ctx=536, majf=0, minf=1 00:12:06.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.014 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.014 job3: (groupid=0, jobs=1): err= 0: pid=1586878: Wed Nov 20 08:09:19 2024 00:12:06.014 read: IOPS=21, BW=87.4KiB/s (89.5kB/s)(88.0KiB/1007msec) 00:12:06.014 slat (nsec): min=10885, max=25933, avg=24613.86, stdev=3081.24 00:12:06.014 clat (usec): min=40858, max=41963, avg=41021.10, stdev=219.08 00:12:06.014 lat (usec): min=40883, max=41988, avg=41045.71, stdev=218.82 00:12:06.014 clat percentiles (usec): 00:12:06.014 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:12:06.014 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:06.014 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:06.014 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:06.014 | 99.99th=[42206] 00:12:06.014 write: IOPS=508, BW=2034KiB/s (2083kB/s)(2048KiB/1007msec); 0 zone resets 00:12:06.014 slat (usec): min=7, max=665, avg=12.94, stdev=28.98 00:12:06.014 clat (usec): min=136, max=637, avg=186.48, stdev=36.20 00:12:06.014 lat (usec): min=147, max=841, avg=199.42, stdev=46.28 00:12:06.014 clat percentiles (usec): 00:12:06.014 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 165], 20.00th=[ 169], 00:12:06.014 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 188], 00:12:06.014 | 70.00th=[ 192], 80.00th=[ 198], 90.00th=[ 204], 95.00th=[ 215], 00:12:06.014 | 99.00th=[ 310], 99.50th=[ 523], 99.90th=[ 635], 99.95th=[ 
635], 00:12:06.014 | 99.99th=[ 635] 00:12:06.014 bw ( KiB/s): min= 4087, max= 4087, per=51.74%, avg=4087.00, stdev= 0.00, samples=1 00:12:06.014 iops : min= 1021, max= 1021, avg=1021.00, stdev= 0.00, samples=1 00:12:06.014 lat (usec) : 250=94.57%, 500=0.75%, 750=0.56% 00:12:06.014 lat (msec) : 50=4.12% 00:12:06.014 cpu : usr=0.70%, sys=0.70%, ctx=536, majf=0, minf=1 00:12:06.014 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:06.014 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.014 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:06.014 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:06.014 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:06.014 00:12:06.014 Run status group 0 (all jobs): 00:12:06.014 READ: bw=347KiB/s (355kB/s), 87.4KiB/s-92.6KiB/s (89.5kB/s-94.8kB/s), io=360KiB (369kB), run=1002-1037msec 00:12:06.014 WRITE: bw=7900KiB/s (8089kB/s), 1975KiB/s-2044KiB/s (2022kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1037msec 00:12:06.014 00:12:06.014 Disk stats (read/write): 00:12:06.014 nvme0n1: ios=68/512, merge=0/0, ticks=806/94, in_queue=900, util=91.28% 00:12:06.014 nvme0n2: ios=43/512, merge=0/0, ticks=1728/84, in_queue=1812, util=98.58% 00:12:06.014 nvme0n3: ios=43/512, merge=0/0, ticks=883/85, in_queue=968, util=92.52% 00:12:06.014 nvme0n4: ios=43/512, merge=0/0, ticks=974/90, in_queue=1064, util=96.44% 00:12:06.014 08:09:19 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:12:06.014 [global] 00:12:06.014 thread=1 00:12:06.014 invalidate=1 00:12:06.014 rw=write 00:12:06.014 time_based=1 00:12:06.014 runtime=1 00:12:06.014 ioengine=libaio 00:12:06.014 direct=1 00:12:06.014 bs=4096 00:12:06.015 iodepth=128 00:12:06.015 norandommap=0 00:12:06.015 numjobs=1 00:12:06.015 00:12:06.015 verify_dump=1 
00:12:06.015 verify_backlog=512 00:12:06.015 verify_state_save=0 00:12:06.015 do_verify=1 00:12:06.015 verify=crc32c-intel 00:12:06.015 [job0] 00:12:06.015 filename=/dev/nvme0n1 00:12:06.015 [job1] 00:12:06.015 filename=/dev/nvme0n2 00:12:06.015 [job2] 00:12:06.015 filename=/dev/nvme0n3 00:12:06.015 [job3] 00:12:06.015 filename=/dev/nvme0n4 00:12:06.015 Could not set queue depth (nvme0n1) 00:12:06.015 Could not set queue depth (nvme0n2) 00:12:06.015 Could not set queue depth (nvme0n3) 00:12:06.015 Could not set queue depth (nvme0n4) 00:12:06.015 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.015 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.015 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.015 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:06.015 fio-3.35 00:12:06.015 Starting 4 threads 00:12:07.405 00:12:07.405 job0: (groupid=0, jobs=1): err= 0: pid=1587297: Wed Nov 20 08:09:21 2024 00:12:07.405 read: IOPS=5210, BW=20.4MiB/s (21.3MB/s)(20.5MiB/1007msec) 00:12:07.405 slat (nsec): min=1263, max=10915k, avg=100040.04, stdev=727352.03 00:12:07.405 clat (usec): min=3973, max=22059, avg=12350.23, stdev=2986.89 00:12:07.405 lat (usec): min=3979, max=27694, avg=12450.27, stdev=3043.69 00:12:07.405 clat percentiles (usec): 00:12:07.405 | 1.00th=[ 5276], 5.00th=[ 9110], 10.00th=[ 9372], 20.00th=[10814], 00:12:07.405 | 30.00th=[11207], 40.00th=[11338], 50.00th=[11469], 60.00th=[11863], 00:12:07.405 | 70.00th=[12125], 80.00th=[14615], 90.00th=[17433], 95.00th=[18744], 00:12:07.405 | 99.00th=[20841], 99.50th=[21103], 99.90th=[21627], 99.95th=[22152], 00:12:07.405 | 99.99th=[22152] 00:12:07.405 write: IOPS=5592, BW=21.8MiB/s (22.9MB/s)(22.0MiB/1007msec); 0 zone resets 00:12:07.405 slat (usec): min=2, max=9358, 
avg=77.95, stdev=374.37 00:12:07.405 clat (usec): min=1176, max=31729, avg=11164.34, stdev=3639.01 00:12:07.405 lat (usec): min=1218, max=31758, avg=11242.30, stdev=3669.71 00:12:07.405 clat percentiles (usec): 00:12:07.405 | 1.00th=[ 3720], 5.00th=[ 6063], 10.00th=[ 7046], 20.00th=[ 9634], 00:12:07.405 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11600], 60.00th=[11731], 00:12:07.405 | 70.00th=[11731], 80.00th=[11863], 90.00th=[11994], 95.00th=[15401], 00:12:07.405 | 99.00th=[29230], 99.50th=[29492], 99.90th=[31589], 99.95th=[31589], 00:12:07.405 | 99.99th=[31851] 00:12:07.405 bw ( KiB/s): min=21680, max=23368, per=27.13%, avg=22524.00, stdev=1193.60, samples=2 00:12:07.405 iops : min= 5420, max= 5842, avg=5631.00, stdev=298.40, samples=2 00:12:07.405 lat (msec) : 2=0.01%, 4=0.84%, 10=19.13%, 20=77.08%, 50=2.94% 00:12:07.405 cpu : usr=3.48%, sys=5.96%, ctx=667, majf=0, minf=2 00:12:07.405 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:12:07.405 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.405 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.405 issued rwts: total=5247,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.405 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.405 job1: (groupid=0, jobs=1): err= 0: pid=1587300: Wed Nov 20 08:09:21 2024 00:12:07.405 read: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec) 00:12:07.405 slat (nsec): min=1132, max=16684k, avg=96627.00, stdev=611005.42 00:12:07.405 clat (usec): min=6718, max=34089, avg=12248.90, stdev=2266.73 00:12:07.405 lat (usec): min=6724, max=34117, avg=12345.53, stdev=2316.59 00:12:07.405 clat percentiles (usec): 00:12:07.405 | 1.00th=[ 6980], 5.00th=[ 9503], 10.00th=[10159], 20.00th=[11076], 00:12:07.405 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11731], 60.00th=[11863], 00:12:07.405 | 70.00th=[12518], 80.00th=[13042], 90.00th=[14222], 95.00th=[17957], 00:12:07.406 | 99.00th=[19792], 
99.50th=[19792], 99.90th=[20317], 99.95th=[23987], 00:12:07.406 | 99.99th=[34341] 00:12:07.406 write: IOPS=5014, BW=19.6MiB/s (20.5MB/s)(19.6MiB/1003msec); 0 zone resets 00:12:07.406 slat (usec): min=2, max=20562, avg=104.65, stdev=704.45 00:12:07.406 clat (usec): min=2452, max=62263, avg=13556.42, stdev=7697.77 00:12:07.406 lat (usec): min=2463, max=62271, avg=13661.07, stdev=7755.68 00:12:07.406 clat percentiles (usec): 00:12:07.406 | 1.00th=[ 6456], 5.00th=[10028], 10.00th=[10945], 20.00th=[11469], 00:12:07.406 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11731], 60.00th=[11863], 00:12:07.406 | 70.00th=[11994], 80.00th=[13304], 90.00th=[15401], 95.00th=[20841], 00:12:07.406 | 99.00th=[60031], 99.50th=[62129], 99.90th=[62129], 99.95th=[62129], 00:12:07.406 | 99.99th=[62129] 00:12:07.406 bw ( KiB/s): min=16984, max=22240, per=23.62%, avg=19612.00, stdev=3716.55, samples=2 00:12:07.406 iops : min= 4246, max= 5560, avg=4903.00, stdev=929.14, samples=2 00:12:07.406 lat (msec) : 4=0.13%, 10=6.56%, 20=90.39%, 50=1.94%, 100=0.98% 00:12:07.406 cpu : usr=4.49%, sys=4.39%, ctx=460, majf=0, minf=1 00:12:07.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:12:07.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.406 issued rwts: total=4608,5030,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.406 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.406 job2: (groupid=0, jobs=1): err= 0: pid=1587305: Wed Nov 20 08:09:21 2024 00:12:07.406 read: IOPS=4621, BW=18.1MiB/s (18.9MB/s)(18.1MiB/1005msec) 00:12:07.406 slat (nsec): min=1488, max=8935.4k, avg=104219.20, stdev=590011.66 00:12:07.406 clat (usec): min=2331, max=32840, avg=13358.77, stdev=2937.91 00:12:07.406 lat (usec): min=4466, max=32867, avg=13462.99, stdev=2983.19 00:12:07.406 clat percentiles (usec): 00:12:07.406 | 1.00th=[ 8291], 5.00th=[10159], 
10.00th=[11076], 20.00th=[12125], 00:12:07.406 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13173], 00:12:07.406 | 70.00th=[13304], 80.00th=[13829], 90.00th=[15401], 95.00th=[20841], 00:12:07.406 | 99.00th=[25822], 99.50th=[26870], 99.90th=[26870], 99.95th=[30016], 00:12:07.406 | 99.99th=[32900] 00:12:07.406 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:12:07.406 slat (usec): min=2, max=4850, avg=94.66, stdev=515.61 00:12:07.406 clat (usec): min=4859, max=23603, avg=12703.39, stdev=1475.70 00:12:07.406 lat (usec): min=4870, max=23628, avg=12798.05, stdev=1535.64 00:12:07.406 clat percentiles (usec): 00:12:07.406 | 1.00th=[ 7635], 5.00th=[10421], 10.00th=[11207], 20.00th=[11863], 00:12:07.406 | 30.00th=[12649], 40.00th=[12780], 50.00th=[12911], 60.00th=[13042], 00:12:07.406 | 70.00th=[13173], 80.00th=[13304], 90.00th=[13566], 95.00th=[14746], 00:12:07.406 | 99.00th=[17957], 99.50th=[18744], 99.90th=[18744], 99.95th=[18744], 00:12:07.406 | 99.99th=[23725] 00:12:07.406 bw ( KiB/s): min=19752, max=20480, per=24.23%, avg=20116.00, stdev=514.77, samples=2 00:12:07.406 iops : min= 4938, max= 5120, avg=5029.00, stdev=128.69, samples=2 00:12:07.406 lat (msec) : 4=0.01%, 10=4.21%, 20=93.17%, 50=2.61% 00:12:07.406 cpu : usr=5.18%, sys=5.88%, ctx=412, majf=0, minf=1 00:12:07.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:07.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.406 issued rwts: total=4645,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.406 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.406 job3: (groupid=0, jobs=1): err= 0: pid=1587306: Wed Nov 20 08:09:21 2024 00:12:07.406 read: IOPS=5038, BW=19.7MiB/s (20.6MB/s)(19.8MiB/1006msec) 00:12:07.406 slat (nsec): min=1388, max=12175k, avg=112184.36, stdev=795334.04 00:12:07.406 clat (usec): 
min=3373, max=25014, avg=13573.63, stdev=3670.14 00:12:07.406 lat (usec): min=4578, max=25039, avg=13685.82, stdev=3711.68 00:12:07.406 clat percentiles (usec): 00:12:07.406 | 1.00th=[ 5014], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[10945], 00:12:07.406 | 30.00th=[11863], 40.00th=[12518], 50.00th=[12780], 60.00th=[13042], 00:12:07.406 | 70.00th=[13435], 80.00th=[16581], 90.00th=[19530], 95.00th=[21365], 00:12:07.406 | 99.00th=[23462], 99.50th=[23987], 99.90th=[24773], 99.95th=[24773], 00:12:07.406 | 99.99th=[25035] 00:12:07.406 write: IOPS=5089, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1006msec); 0 zone resets 00:12:07.406 slat (usec): min=2, max=2914, avg=78.83, stdev=231.23 00:12:07.406 clat (usec): min=1577, max=24710, avg=11436.25, stdev=2598.60 00:12:07.406 lat (usec): min=1590, max=24714, avg=11515.08, stdev=2615.51 00:12:07.406 clat percentiles (usec): 00:12:07.406 | 1.00th=[ 3490], 5.00th=[ 5407], 10.00th=[ 7308], 20.00th=[10159], 00:12:07.406 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12649], 60.00th=[12911], 00:12:07.406 | 70.00th=[12911], 80.00th=[13042], 90.00th=[13173], 95.00th=[13304], 00:12:07.406 | 99.00th=[13566], 99.50th=[13698], 99.90th=[24249], 99.95th=[24249], 00:12:07.406 | 99.99th=[24773] 00:12:07.406 bw ( KiB/s): min=19920, max=21040, per=24.67%, avg=20480.00, stdev=791.96, samples=2 00:12:07.406 iops : min= 4980, max= 5260, avg=5120.00, stdev=197.99, samples=2 00:12:07.406 lat (msec) : 2=0.11%, 4=0.79%, 10=13.20%, 20=81.44%, 50=4.46% 00:12:07.406 cpu : usr=4.08%, sys=5.17%, ctx=714, majf=0, minf=1 00:12:07.406 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:07.406 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:07.406 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:07.406 issued rwts: total=5069,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:07.406 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:07.406 00:12:07.406 Run status group 0 (all 
jobs): 00:12:07.406 READ: bw=75.9MiB/s (79.6MB/s), 17.9MiB/s-20.4MiB/s (18.8MB/s-21.3MB/s), io=76.4MiB (80.2MB), run=1003-1007msec 00:12:07.406 WRITE: bw=81.1MiB/s (85.0MB/s), 19.6MiB/s-21.8MiB/s (20.5MB/s-22.9MB/s), io=81.6MiB (85.6MB), run=1003-1007msec 00:12:07.406 00:12:07.406 Disk stats (read/write): 00:12:07.406 nvme0n1: ios=4516/4608, merge=0/0, ticks=53637/51338, in_queue=104975, util=86.97% 00:12:07.406 nvme0n2: ios=3974/4096, merge=0/0, ticks=20913/19450, in_queue=40363, util=87.21% 00:12:07.406 nvme0n3: ios=4154/4172, merge=0/0, ticks=21594/18147, in_queue=39741, util=98.34% 00:12:07.406 nvme0n4: ios=4135/4559, merge=0/0, ticks=53852/51109, in_queue=104961, util=95.80% 00:12:07.406 08:09:21 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:12:07.406 [global] 00:12:07.406 thread=1 00:12:07.406 invalidate=1 00:12:07.406 rw=randwrite 00:12:07.406 time_based=1 00:12:07.406 runtime=1 00:12:07.406 ioengine=libaio 00:12:07.406 direct=1 00:12:07.406 bs=4096 00:12:07.406 iodepth=128 00:12:07.406 norandommap=0 00:12:07.406 numjobs=1 00:12:07.406 00:12:07.406 verify_dump=1 00:12:07.406 verify_backlog=512 00:12:07.406 verify_state_save=0 00:12:07.406 do_verify=1 00:12:07.406 verify=crc32c-intel 00:12:07.406 [job0] 00:12:07.406 filename=/dev/nvme0n1 00:12:07.406 [job1] 00:12:07.406 filename=/dev/nvme0n2 00:12:07.406 [job2] 00:12:07.406 filename=/dev/nvme0n3 00:12:07.406 [job3] 00:12:07.406 filename=/dev/nvme0n4 00:12:07.406 Could not set queue depth (nvme0n1) 00:12:07.406 Could not set queue depth (nvme0n2) 00:12:07.406 Could not set queue depth (nvme0n3) 00:12:07.406 Could not set queue depth (nvme0n4) 00:12:07.673 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.673 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:12:07.673 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.673 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:12:07.673 fio-3.35 00:12:07.673 Starting 4 threads 00:12:09.117 00:12:09.117 job0: (groupid=0, jobs=1): err= 0: pid=1587677: Wed Nov 20 08:09:22 2024 00:12:09.117 read: IOPS=4702, BW=18.4MiB/s (19.3MB/s)(19.2MiB/1047msec) 00:12:09.117 slat (nsec): min=1385, max=12302k, avg=107546.35, stdev=739964.52 00:12:09.117 clat (usec): min=3719, max=69423, avg=13986.51, stdev=7927.50 00:12:09.117 lat (usec): min=3726, max=69429, avg=14094.05, stdev=7967.28 00:12:09.117 clat percentiles (usec): 00:12:09.117 | 1.00th=[ 5145], 5.00th=[ 7898], 10.00th=[ 8717], 20.00th=[ 9634], 00:12:09.117 | 30.00th=[10028], 40.00th=[10945], 50.00th=[11469], 60.00th=[12125], 00:12:09.117 | 70.00th=[14484], 80.00th=[16909], 90.00th=[22676], 95.00th=[24249], 00:12:09.117 | 99.00th=[57934], 99.50th=[58459], 99.90th=[58459], 99.95th=[69731], 00:12:09.117 | 99.99th=[69731] 00:12:09.117 write: IOPS=4890, BW=19.1MiB/s (20.0MB/s)(20.0MiB/1047msec); 0 zone resets 00:12:09.117 slat (usec): min=2, max=11587, avg=83.87, stdev=475.90 00:12:09.117 clat (usec): min=572, max=35480, avg=12423.59, stdev=6724.17 00:12:09.117 lat (usec): min=580, max=35491, avg=12507.46, stdev=6776.93 00:12:09.117 clat percentiles (usec): 00:12:09.117 | 1.00th=[ 3359], 5.00th=[ 5211], 10.00th=[ 6456], 20.00th=[ 8586], 00:12:09.117 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10814], 00:12:09.117 | 70.00th=[11731], 80.00th=[14877], 90.00th=[23725], 95.00th=[29492], 00:12:09.117 | 99.00th=[32375], 99.50th=[33424], 99.90th=[35390], 99.95th=[35390], 00:12:09.117 | 99.99th=[35390] 00:12:09.117 bw ( KiB/s): min=16384, max=24576, per=27.55%, avg=20480.00, stdev=5792.62, samples=2 00:12:09.117 iops : min= 4096, max= 6144, avg=5120.00, stdev=1448.15, samples=2 
00:12:09.117 lat (usec) : 750=0.04%, 1000=0.09% 00:12:09.117 lat (msec) : 4=0.96%, 10=34.02%, 20=51.19%, 50=13.07%, 100=0.63% 00:12:09.117 cpu : usr=3.63%, sys=4.97%, ctx=550, majf=0, minf=2 00:12:09.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:12:09.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.117 issued rwts: total=4924,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.117 job1: (groupid=0, jobs=1): err= 0: pid=1587678: Wed Nov 20 08:09:22 2024 00:12:09.117 read: IOPS=5726, BW=22.4MiB/s (23.5MB/s)(22.5MiB/1004msec) 00:12:09.117 slat (nsec): min=1062, max=9028.3k, avg=82930.93, stdev=453308.53 00:12:09.117 clat (usec): min=1057, max=25803, avg=10871.41, stdev=2325.42 00:12:09.117 lat (usec): min=3743, max=25810, avg=10954.34, stdev=2310.61 00:12:09.117 clat percentiles (usec): 00:12:09.117 | 1.00th=[ 4686], 5.00th=[ 8586], 10.00th=[ 9372], 20.00th=[10028], 00:12:09.117 | 30.00th=[10159], 40.00th=[10421], 50.00th=[10552], 60.00th=[10552], 00:12:09.117 | 70.00th=[10945], 80.00th=[11731], 90.00th=[12125], 95.00th=[13173], 00:12:09.117 | 99.00th=[22676], 99.50th=[23200], 99.90th=[23200], 99.95th=[23462], 00:12:09.117 | 99.99th=[25822] 00:12:09.117 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:12:09.117 slat (nsec): min=1792, max=10116k, avg=80564.17, stdev=395349.17 00:12:09.117 clat (usec): min=5896, max=17111, avg=10478.01, stdev=1151.75 00:12:09.117 lat (usec): min=5904, max=20964, avg=10558.57, stdev=1135.69 00:12:09.117 clat percentiles (usec): 00:12:09.117 | 1.00th=[ 7046], 5.00th=[ 8455], 10.00th=[ 9110], 20.00th=[ 9765], 00:12:09.117 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:12:09.117 | 70.00th=[11207], 80.00th=[11600], 90.00th=[11994], 95.00th=[12125], 00:12:09.117 | 
99.00th=[12649], 99.50th=[13304], 99.90th=[13829], 99.95th=[14222], 00:12:09.117 | 99.99th=[17171] 00:12:09.117 bw ( KiB/s): min=24488, max=24576, per=33.00%, avg=24532.00, stdev=62.23, samples=2 00:12:09.117 iops : min= 6122, max= 6144, avg=6133.00, stdev=15.56, samples=2 00:12:09.117 lat (msec) : 2=0.01%, 4=0.25%, 10=25.32%, 20=73.45%, 50=0.98% 00:12:09.117 cpu : usr=2.49%, sys=4.09%, ctx=702, majf=0, minf=1 00:12:09.117 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:12:09.117 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.117 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.117 issued rwts: total=5749,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.117 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.117 job2: (groupid=0, jobs=1): err= 0: pid=1587679: Wed Nov 20 08:09:22 2024 00:12:09.117 read: IOPS=3223, BW=12.6MiB/s (13.2MB/s)(12.7MiB/1009msec) 00:12:09.117 slat (nsec): min=1233, max=25771k, avg=150816.71, stdev=1095508.84 00:12:09.118 clat (usec): min=389, max=87684, avg=19601.85, stdev=13335.48 00:12:09.118 lat (usec): min=6930, max=87929, avg=19752.67, stdev=13422.25 00:12:09.118 clat percentiles (usec): 00:12:09.118 | 1.00th=[ 8029], 5.00th=[ 9765], 10.00th=[10683], 20.00th=[11469], 00:12:09.118 | 30.00th=[12649], 40.00th=[12780], 50.00th=[14615], 60.00th=[17433], 00:12:09.118 | 70.00th=[17957], 80.00th=[21890], 90.00th=[35914], 95.00th=[54789], 00:12:09.118 | 99.00th=[68682], 99.50th=[69731], 99.90th=[77071], 99.95th=[84411], 00:12:09.118 | 99.99th=[87557] 00:12:09.118 write: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec); 0 zone resets 00:12:09.118 slat (nsec): min=1921, max=22846k, avg=138965.33, stdev=937472.46 00:12:09.118 clat (usec): min=6513, max=90017, avg=17715.36, stdev=12476.95 00:12:09.118 lat (usec): min=6517, max=90025, avg=17854.32, stdev=12561.33 00:12:09.118 clat percentiles (usec): 00:12:09.118 | 1.00th=[ 7963], 
5.00th=[10421], 10.00th=[10814], 20.00th=[11469], 00:12:09.118 | 30.00th=[12125], 40.00th=[12911], 50.00th=[13173], 60.00th=[14746], 00:12:09.118 | 70.00th=[15926], 80.00th=[19792], 90.00th=[26870], 95.00th=[41157], 00:12:09.118 | 99.00th=[85459], 99.50th=[87557], 99.90th=[89654], 99.95th=[89654], 00:12:09.118 | 99.99th=[89654] 00:12:09.118 bw ( KiB/s): min=12288, max=16384, per=19.29%, avg=14336.00, stdev=2896.31, samples=2 00:12:09.118 iops : min= 3072, max= 4096, avg=3584.00, stdev=724.08, samples=2 00:12:09.118 lat (usec) : 500=0.01% 00:12:09.118 lat (msec) : 10=4.70%, 20=74.64%, 50=15.93%, 100=4.72% 00:12:09.118 cpu : usr=2.38%, sys=3.47%, ctx=296, majf=0, minf=1 00:12:09.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:12:09.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.118 issued rwts: total=3253,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.118 job3: (groupid=0, jobs=1): err= 0: pid=1587680: Wed Nov 20 08:09:22 2024 00:12:09.118 read: IOPS=4252, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1008msec) 00:12:09.118 slat (nsec): min=1300, max=11893k, avg=112479.90, stdev=743328.88 00:12:09.118 clat (usec): min=4488, max=34281, avg=13960.35, stdev=3725.18 00:12:09.118 lat (usec): min=4494, max=34287, avg=14072.83, stdev=3784.67 00:12:09.118 clat percentiles (usec): 00:12:09.118 | 1.00th=[ 9110], 5.00th=[10290], 10.00th=[11076], 20.00th=[11338], 00:12:09.118 | 30.00th=[11731], 40.00th=[12125], 50.00th=[12780], 60.00th=[13435], 00:12:09.118 | 70.00th=[15401], 80.00th=[16188], 90.00th=[17957], 95.00th=[21627], 00:12:09.118 | 99.00th=[27395], 99.50th=[31589], 99.90th=[34341], 99.95th=[34341], 00:12:09.118 | 99.99th=[34341] 00:12:09.118 write: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec); 0 zone resets 00:12:09.118 slat (nsec): min=1867, 
max=11696k, avg=107145.80, stdev=606016.60 00:12:09.118 clat (usec): min=1905, max=47760, avg=14694.78, stdev=7729.84 00:12:09.118 lat (usec): min=1918, max=47767, avg=14801.93, stdev=7784.94 00:12:09.118 clat percentiles (usec): 00:12:09.118 | 1.00th=[ 5080], 5.00th=[ 8291], 10.00th=[ 9503], 20.00th=[10945], 00:12:09.118 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11994], 60.00th=[12911], 00:12:09.118 | 70.00th=[13829], 80.00th=[15795], 90.00th=[22414], 95.00th=[34866], 00:12:09.118 | 99.00th=[44827], 99.50th=[46400], 99.90th=[47973], 99.95th=[47973], 00:12:09.118 | 99.99th=[47973] 00:12:09.118 bw ( KiB/s): min=16384, max=20480, per=24.80%, avg=18432.00, stdev=2896.31, samples=2 00:12:09.118 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:12:09.118 lat (msec) : 2=0.03%, 4=0.16%, 10=7.62%, 20=81.54%, 50=10.65% 00:12:09.118 cpu : usr=2.98%, sys=5.26%, ctx=430, majf=0, minf=1 00:12:09.118 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:12:09.118 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:09.118 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:12:09.118 issued rwts: total=4287,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:09.118 latency : target=0, window=0, percentile=100.00%, depth=128 00:12:09.118 00:12:09.118 Run status group 0 (all jobs): 00:12:09.118 READ: bw=68.0MiB/s (71.3MB/s), 12.6MiB/s-22.4MiB/s (13.2MB/s-23.5MB/s), io=71.1MiB (74.6MB), run=1004-1047msec 00:12:09.118 WRITE: bw=72.6MiB/s (76.1MB/s), 13.9MiB/s-23.9MiB/s (14.5MB/s-25.1MB/s), io=76.0MiB (79.7MB), run=1004-1047msec 00:12:09.118 00:12:09.118 Disk stats (read/write): 00:12:09.118 nvme0n1: ios=4146/4369, merge=0/0, ticks=42763/50492, in_queue=93255, util=87.17% 00:12:09.118 nvme0n2: ios=5036/5120, merge=0/0, ticks=15869/12966, in_queue=28835, util=98.48% 00:12:09.118 nvme0n3: ios=2961/3072, merge=0/0, ticks=21399/19864, in_queue=41263, util=98.86% 00:12:09.118 nvme0n4: ios=3584/3783, 
merge=0/0, ticks=30191/38031, in_queue=68222, util=89.62% 00:12:09.118 08:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:12:09.118 08:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1587910 00:12:09.118 08:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:12:09.118 08:09:22 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:12:09.118 [global] 00:12:09.118 thread=1 00:12:09.118 invalidate=1 00:12:09.118 rw=read 00:12:09.118 time_based=1 00:12:09.118 runtime=10 00:12:09.118 ioengine=libaio 00:12:09.118 direct=1 00:12:09.118 bs=4096 00:12:09.118 iodepth=1 00:12:09.118 norandommap=1 00:12:09.118 numjobs=1 00:12:09.118 00:12:09.118 [job0] 00:12:09.118 filename=/dev/nvme0n1 00:12:09.118 [job1] 00:12:09.118 filename=/dev/nvme0n2 00:12:09.118 [job2] 00:12:09.118 filename=/dev/nvme0n3 00:12:09.118 [job3] 00:12:09.118 filename=/dev/nvme0n4 00:12:09.118 Could not set queue depth (nvme0n1) 00:12:09.118 Could not set queue depth (nvme0n2) 00:12:09.118 Could not set queue depth (nvme0n3) 00:12:09.118 Could not set queue depth (nvme0n4) 00:12:09.374 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.374 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.374 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.374 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:12:09.374 fio-3.35 00:12:09.374 Starting 4 threads 00:12:11.894 08:09:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:12:12.151 08:09:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:12:12.151 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:12:12.151 fio: pid=1588055, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:12.408 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.408 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:12:12.408 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:12:12.408 fio: pid=1588054, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:12.666 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=323584, buflen=4096 00:12:12.666 fio: pid=1588052, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:12.666 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.666 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:12:12.666 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=35352576, buflen=4096 00:12:12.666 fio: pid=1588053, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:12:12.923 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.923 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:12:12.923 00:12:12.923 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1588052: Wed Nov 20 08:09:26 2024 00:12:12.923 read: IOPS=25, BW=102KiB/s (104kB/s)(316KiB/3106msec) 00:12:12.923 slat (nsec): min=9514, max=89011, avg=24314.60, stdev=10894.43 00:12:12.923 clat (usec): min=280, max=41999, avg=39019.16, stdev=8999.72 00:12:12.923 lat (usec): min=305, max=42022, avg=39043.50, stdev=8998.00 00:12:12.923 clat percentiles (usec): 00:12:12.923 | 1.00th=[ 281], 5.00th=[ 355], 10.00th=[40633], 20.00th=[41157], 00:12:12.923 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:12.923 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:12:12.923 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:12.923 | 99.99th=[42206] 00:12:12.923 bw ( KiB/s): min= 96, max= 112, per=0.93%, avg=100.80, stdev= 7.16, samples=5 00:12:12.923 iops : min= 24, max= 28, avg=25.20, stdev= 1.79, samples=5 00:12:12.923 lat (usec) : 500=5.00% 00:12:12.923 lat (msec) : 50=93.75% 00:12:12.923 cpu : usr=0.10%, sys=0.00%, ctx=83, majf=0, minf=1 00:12:12.923 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.923 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.923 complete : 0=1.2%, 4=98.8%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.923 issued rwts: total=80,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.923 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.923 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1588053: Wed Nov 20 08:09:26 2024 00:12:12.923 read: IOPS=2616, BW=10.2MiB/s (10.7MB/s)(33.7MiB/3299msec) 00:12:12.923 slat (usec): min=6, max=7684, avg=10.55, stdev=116.75 00:12:12.924 clat (usec): min=144, max=41972, avg=367.14, 
stdev=2322.33 00:12:12.924 lat (usec): min=175, max=44958, avg=377.69, stdev=2333.70 00:12:12.924 clat percentiles (usec): 00:12:12.924 | 1.00th=[ 178], 5.00th=[ 188], 10.00th=[ 196], 20.00th=[ 210], 00:12:12.924 | 30.00th=[ 227], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 245], 00:12:12.924 | 70.00th=[ 249], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 265], 00:12:12.924 | 99.00th=[ 277], 99.50th=[ 379], 99.90th=[41157], 99.95th=[41157], 00:12:12.924 | 99.99th=[42206] 00:12:12.924 bw ( KiB/s): min= 99, max=17752, per=100.00%, avg=11095.17, stdev=7653.18, samples=6 00:12:12.924 iops : min= 24, max= 4438, avg=2773.67, stdev=1913.51, samples=6 00:12:12.924 lat (usec) : 250=70.42%, 500=29.22%, 750=0.02% 00:12:12.924 lat (msec) : 50=0.32% 00:12:12.924 cpu : usr=1.55%, sys=4.18%, ctx=8637, majf=0, minf=2 00:12:12.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.924 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.924 issued rwts: total=8632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.924 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1588054: Wed Nov 20 08:09:26 2024 00:12:12.924 read: IOPS=25, BW=98.6KiB/s (101kB/s)(288KiB/2920msec) 00:12:12.924 slat (usec): min=10, max=17711, avg=264.67, stdev=2070.36 00:12:12.924 clat (usec): min=280, max=42010, avg=39975.39, stdev=6752.39 00:12:12.924 lat (usec): min=307, max=58860, avg=40243.42, stdev=7107.00 00:12:12.924 clat percentiles (usec): 00:12:12.924 | 1.00th=[ 281], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:12:12.924 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:12.924 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:12:12.924 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 
99.95th=[42206], 00:12:12.924 | 99.99th=[42206] 00:12:12.924 bw ( KiB/s): min= 96, max= 104, per=0.92%, avg=99.20, stdev= 4.38, samples=5 00:12:12.924 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:12:12.924 lat (usec) : 500=2.74% 00:12:12.924 lat (msec) : 50=95.89% 00:12:12.924 cpu : usr=0.10%, sys=0.00%, ctx=74, majf=0, minf=2 00:12:12.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.924 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.924 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.924 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.924 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1588055: Wed Nov 20 08:09:26 2024 00:12:12.924 read: IOPS=24, BW=98.2KiB/s (101kB/s)(268KiB/2730msec) 00:12:12.924 slat (nsec): min=12835, max=30875, avg=23598.44, stdev=1805.89 00:12:12.924 clat (usec): min=429, max=41984, avg=40384.74, stdev=4957.31 00:12:12.924 lat (usec): min=460, max=42007, avg=40408.34, stdev=4956.41 00:12:12.924 clat percentiles (usec): 00:12:12.924 | 1.00th=[ 429], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:12:12.924 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:12:12.924 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:12:12.924 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:12:12.924 | 99.99th=[42206] 00:12:12.924 bw ( KiB/s): min= 96, max= 104, per=0.92%, avg=99.20, stdev= 4.38, samples=5 00:12:12.924 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:12:12.924 lat (usec) : 500=1.47% 00:12:12.924 lat (msec) : 50=97.06% 00:12:12.924 cpu : usr=0.11%, sys=0.00%, ctx=69, majf=0, minf=2 00:12:12.924 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:12:12.924 submit : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.924 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:12.924 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:12.924 latency : target=0, window=0, percentile=100.00%, depth=1 00:12:12.924 00:12:12.924 Run status group 0 (all jobs): 00:12:12.924 READ: bw=10.5MiB/s (11.0MB/s), 98.2KiB/s-10.2MiB/s (101kB/s-10.7MB/s), io=34.6MiB (36.2MB), run=2730-3299msec 00:12:12.924 00:12:12.924 Disk stats (read/write): 00:12:12.924 nvme0n1: ios=73/0, merge=0/0, ticks=2839/0, in_queue=2839, util=95.43% 00:12:12.924 nvme0n2: ios=8324/0, merge=0/0, ticks=2905/0, in_queue=2905, util=96.04% 00:12:12.924 nvme0n3: ios=117/0, merge=0/0, ticks=2972/0, in_queue=2972, util=99.26% 00:12:12.924 nvme0n4: ios=111/0, merge=0/0, ticks=2768/0, in_queue=2768, util=99.11% 00:12:12.924 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:12.924 08:09:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:12:13.181 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.181 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:12:13.438 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.438 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:12:13.695 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in 
$malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:12:13.695 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 1587910 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:13.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:12:13.953 nvmf hotplug test: fio failed as expected 00:12:13.953 08:09:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:14.211 rmmod nvme_tcp 00:12:14.211 rmmod nvme_fabrics 00:12:14.211 rmmod nvme_keyring 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 1584981 ']' 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 1584981 00:12:14.211 
08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1584981 ']' 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1584981 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1584981 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1584981' 00:12:14.211 killing process with pid 1584981 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1584981 00:12:14.211 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1584981 00:12:14.469 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:14.469 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:12:14.469 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev 00:12:14.470 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:12:14.470 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:14.470 08:09:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:14.470 08:09:28 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:12:17.005 00:12:17.005 real 0m27.149s 00:12:17.005 user 1m48.144s 00:12:17.005 sys 0m8.261s 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:12:17.005 ************************************ 00:12:17.005 END TEST nvmf_fio_target 00:12:17.005 ************************************ 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:17.005 
************************************ 00:12:17.005 START TEST nvmf_bdevio 00:12:17.005 ************************************ 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:12:17.005 * Looking for test storage... 00:12:17.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.005 08:09:30 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.005 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:17.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.006 --rc genhtml_branch_coverage=1 00:12:17.006 --rc genhtml_function_coverage=1 00:12:17.006 --rc genhtml_legend=1 00:12:17.006 --rc geninfo_all_blocks=1 00:12:17.006 --rc geninfo_unexecuted_blocks=1 00:12:17.006 00:12:17.006 ' 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:17.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.006 --rc genhtml_branch_coverage=1 00:12:17.006 --rc genhtml_function_coverage=1 00:12:17.006 --rc genhtml_legend=1 00:12:17.006 --rc geninfo_all_blocks=1 00:12:17.006 --rc geninfo_unexecuted_blocks=1 00:12:17.006 00:12:17.006 ' 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:17.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.006 --rc genhtml_branch_coverage=1 00:12:17.006 --rc genhtml_function_coverage=1 00:12:17.006 --rc genhtml_legend=1 00:12:17.006 --rc geninfo_all_blocks=1 00:12:17.006 --rc geninfo_unexecuted_blocks=1 00:12:17.006 00:12:17.006 ' 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:17.006 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.006 --rc genhtml_branch_coverage=1 00:12:17.006 --rc genhtml_function_coverage=1 00:12:17.006 --rc genhtml_legend=1 00:12:17.006 --rc geninfo_all_blocks=1 00:12:17.006 --rc geninfo_unexecuted_blocks=1 00:12:17.006 00:12:17.006 ' 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # 
uname -s 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:12:17.006 
08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:17.006 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns
00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]]
00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs
00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable
00:12:17.006 08:09:30 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=()
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=()
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=()
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=()
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=()
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=()
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=()
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}")
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}")
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 ))
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:12:23.573 Found 0000:86:00.0 (0x8086 - 0x159b)
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:12:23.573 Found 0000:86:00.1 (0x8086 - 0x159b)
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 ))
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:12:23.573 Found net devices under 0000:86:00.0: cvl_0_0
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:12:23.573 Found net devices under 0000:86:00.1: cvl_0_1
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 ))
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@247 -- # create_target_ns
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max
00:12:23.573 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=()
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ phy == phy ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@55 -- # initiator=cvl_0_0
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@55 -- # target=cvl_0_1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ phy == veth ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ phy == veth ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias
00:12:23.574 10.0.0.1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:12:23.574 10.0.0.2
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up cvl_0_0
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns=
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up'
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ phy == veth ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ phy == veth ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=1 pair
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:12:23.574 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:23.574 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.319 ms
00:12:23.574 
00:12:23.574 --- 10.0.0.1 ping statistics ---
00:12:23.574 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:23.574 rtt min/avg/max/mdev = 0.319/0.319/0.319/0.000 ms
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:12:23.574 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:12:23.575 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:23.575 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms
00:12:23.575 
00:12:23.575 --- 10.0.0.2 ping statistics ---
00:12:23.575 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:23.575 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ ))
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP=
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2
00:12:23.575 '
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=1592472
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 1592472
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1592472 ']'
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:23.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:23.575 08:09:36 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:23.575 [2024-11-20 08:09:36.904752] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization...
00:12:23.575 [2024-11-20 08:09:36.904808] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:23.575 [2024-11-20 08:09:36.985415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:12:23.575 [2024-11-20 08:09:37.025919] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:23.575 [2024-11-20 08:09:37.025956] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:23.575 [2024-11-20 08:09:37.025963] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:23.575 [2024-11-20 08:09:37.025970] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:23.575 [2024-11-20 08:09:37.025976] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:23.575 [2024-11-20 08:09:37.027642] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:12:23.575 [2024-11-20 08:09:37.027738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:12:23.575 [2024-11-20 08:09:37.027824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:12:23.575 [2024-11-20 08:09:37.027824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:12:23.575 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:23.575 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:23.576 [2024-11-20 08:09:37.171672] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:23.576 Malloc0
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio
-- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:23.576 [2024-11-20 08:09:37.240185] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:12:23.576 { 00:12:23.576 "params": { 00:12:23.576 "name": "Nvme$subsystem", 00:12:23.576 "trtype": "$TEST_TRANSPORT", 00:12:23.576 "traddr": "$NVMF_FIRST_TARGET_IP", 00:12:23.576 "adrfam": "ipv4", 00:12:23.576 "trsvcid": "$NVMF_PORT", 00:12:23.576 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:12:23.576 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:12:23.576 "hdgst": ${hdgst:-false}, 00:12:23.576 "ddgst": ${ddgst:-false} 00:12:23.576 }, 00:12:23.576 "method": "bdev_nvme_attach_controller" 00:12:23.576 } 00:12:23.576 EOF 00:12:23.576 )") 00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:12:23.576 08:09:37 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:12:23.576 "params": { 00:12:23.576 "name": "Nvme1", 00:12:23.576 "trtype": "tcp", 00:12:23.576 "traddr": "10.0.0.2", 00:12:23.576 "adrfam": "ipv4", 00:12:23.576 "trsvcid": "4420", 00:12:23.576 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:12:23.576 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:12:23.576 "hdgst": false, 00:12:23.576 "ddgst": false 00:12:23.576 }, 00:12:23.576 "method": "bdev_nvme_attach_controller" 00:12:23.576 }' 00:12:23.576 [2024-11-20 08:09:37.292278] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:12:23.576 [2024-11-20 08:09:37.292322] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1592571 ] 00:12:23.576 [2024-11-20 08:09:37.368851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:23.576 [2024-11-20 08:09:37.412360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.576 [2024-11-20 08:09:37.412469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.576 [2024-11-20 08:09:37.412469] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.576 I/O targets: 00:12:23.576 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:12:23.576 00:12:23.576 00:12:23.576 CUnit - A unit testing framework for C - Version 2.1-3 00:12:23.576 http://cunit.sourceforge.net/ 00:12:23.576 00:12:23.576 00:12:23.576 Suite: bdevio tests on: Nvme1n1 00:12:23.833 Test: blockdev write read block ...passed 00:12:23.833 Test: blockdev write zeroes read block ...passed 00:12:23.833 Test: blockdev write zeroes read no split ...passed 00:12:23.833 Test: blockdev write zeroes read split 
...passed 00:12:23.833 Test: blockdev write zeroes read split partial ...passed 00:12:23.833 Test: blockdev reset ...[2024-11-20 08:09:37.725069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:12:23.833 [2024-11-20 08:09:37.725130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23cf340 (9): Bad file descriptor 00:12:23.833 [2024-11-20 08:09:37.737794] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:12:23.833 passed 00:12:23.833 Test: blockdev write read 8 blocks ...passed 00:12:23.833 Test: blockdev write read size > 128k ...passed 00:12:23.833 Test: blockdev write read invalid size ...passed 00:12:23.833 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:12:23.833 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:12:23.833 Test: blockdev write read max offset ...passed 00:12:24.089 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:12:24.089 Test: blockdev writev readv 8 blocks ...passed 00:12:24.089 Test: blockdev writev readv 30 x 1block ...passed 00:12:24.090 Test: blockdev writev readv block ...passed 00:12:24.090 Test: blockdev writev readv size > 128k ...passed 00:12:24.090 Test: blockdev writev readv size > 128k in two iovs ...passed 00:12:24.090 Test: blockdev comparev and writev ...[2024-11-20 08:09:37.950932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.090 [2024-11-20 08:09:37.950959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:12:24.090 [2024-11-20 08:09:37.950973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.090 [2024-11-20 
08:09:37.950981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:12:24.090 [2024-11-20 08:09:37.951218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.090 [2024-11-20 08:09:37.951228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:12:24.090 [2024-11-20 08:09:37.951239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.090 [2024-11-20 08:09:37.951246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:12:24.090 [2024-11-20 08:09:37.951469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.090 [2024-11-20 08:09:37.951479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:12:24.090 [2024-11-20 08:09:37.951490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.090 [2024-11-20 08:09:37.951496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:12:24.090 [2024-11-20 08:09:37.951720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.090 [2024-11-20 08:09:37.951730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:12:24.090 [2024-11-20 08:09:37.951741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:12:24.090 [2024-11-20 08:09:37.951751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:12:24.090 passed 00:12:24.090 Test: blockdev nvme passthru rw ...passed 00:12:24.090 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:09:38.034553] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:24.090 [2024-11-20 08:09:38.034570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:12:24.090 [2024-11-20 08:09:38.034674] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:24.090 [2024-11-20 08:09:38.034683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:12:24.090 [2024-11-20 08:09:38.034779] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:24.090 [2024-11-20 08:09:38.034788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:12:24.090 [2024-11-20 08:09:38.034887] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:12:24.090 [2024-11-20 08:09:38.034896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:12:24.090 passed 00:12:24.090 Test: blockdev nvme admin passthru ...passed 00:12:24.090 Test: blockdev copy ...passed 00:12:24.090 00:12:24.090 Run Summary: Type Total Ran Passed Failed Inactive 00:12:24.090 suites 1 1 n/a 0 0 00:12:24.090 tests 23 23 23 0 0 00:12:24.090 asserts 152 152 152 0 n/a 00:12:24.090 00:12:24.090 Elapsed time = 1.042 seconds 
00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:24.347 rmmod nvme_tcp 00:12:24.347 rmmod nvme_fabrics 00:12:24.347 rmmod nvme_keyring 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 1592472 ']' 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 1592472 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 1592472 ']' 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1592472 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1592472 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1592472' 00:12:24.347 killing process with pid 1592472 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1592472 00:12:24.347 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1592472 00:12:24.606 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:24.606 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:12:24.606 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:12:24.606 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@257 -- # remove_target_ns 00:12:24.606 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:24.606 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:24.606 08:09:38 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@258 
-- # delete_main_bridge 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush 
dev cvl_0_1 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:12:27.140 00:12:27.140 real 0m10.111s 00:12:27.140 user 0m9.624s 00:12:27.140 sys 0m5.129s 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:12:27.140 ************************************ 00:12:27.140 END TEST nvmf_bdevio 00:12:27.140 ************************************ 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:12:27.140 00:12:27.140 real 4m40.694s 00:12:27.140 user 10m31.834s 00:12:27.140 sys 1m38.591s 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:12:27.140 ************************************ 00:12:27.140 END TEST nvmf_target_core 00:12:27.140 ************************************ 00:12:27.140 08:09:40 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:27.140 08:09:40 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:27.140 08:09:40 nvmf_tcp -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.140 08:09:40 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:27.140 ************************************ 00:12:27.140 START TEST nvmf_target_extra 00:12:27.140 ************************************ 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:12:27.140 * Looking for test storage... 00:12:27.140 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 
-- # local lt=0 gt=0 eq=0 v 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:12:27.140 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:27.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.141 --rc genhtml_branch_coverage=1 
00:12:27.141 --rc genhtml_function_coverage=1 00:12:27.141 --rc genhtml_legend=1 00:12:27.141 --rc geninfo_all_blocks=1 00:12:27.141 --rc geninfo_unexecuted_blocks=1 00:12:27.141 00:12:27.141 ' 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:27.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.141 --rc genhtml_branch_coverage=1 00:12:27.141 --rc genhtml_function_coverage=1 00:12:27.141 --rc genhtml_legend=1 00:12:27.141 --rc geninfo_all_blocks=1 00:12:27.141 --rc geninfo_unexecuted_blocks=1 00:12:27.141 00:12:27.141 ' 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:27.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.141 --rc genhtml_branch_coverage=1 00:12:27.141 --rc genhtml_function_coverage=1 00:12:27.141 --rc genhtml_legend=1 00:12:27.141 --rc geninfo_all_blocks=1 00:12:27.141 --rc geninfo_unexecuted_blocks=1 00:12:27.141 00:12:27.141 ' 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:27.141 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.141 --rc genhtml_branch_coverage=1 00:12:27.141 --rc genhtml_function_coverage=1 00:12:27.141 --rc genhtml_legend=1 00:12:27.141 --rc geninfo_all_blocks=1 00:12:27.141 --rc geninfo_unexecuted_blocks=1 00:12:27.141 00:12:27.141 ' 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.141 08:09:40 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:27.141 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 
00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:27.141 ************************************ 00:12:27.141 START TEST nvmf_example 00:12:27.141 ************************************ 00:12:27.141 08:09:40 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:12:27.141 * Looking for test storage... 
00:12:27.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:27.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.142 --rc genhtml_branch_coverage=1 00:12:27.142 --rc 
genhtml_function_coverage=1 00:12:27.142 --rc genhtml_legend=1 00:12:27.142 --rc geninfo_all_blocks=1 00:12:27.142 --rc geninfo_unexecuted_blocks=1 00:12:27.142 00:12:27.142 ' 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:27.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.142 --rc genhtml_branch_coverage=1 00:12:27.142 --rc genhtml_function_coverage=1 00:12:27.142 --rc genhtml_legend=1 00:12:27.142 --rc geninfo_all_blocks=1 00:12:27.142 --rc geninfo_unexecuted_blocks=1 00:12:27.142 00:12:27.142 ' 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:27.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.142 --rc genhtml_branch_coverage=1 00:12:27.142 --rc genhtml_function_coverage=1 00:12:27.142 --rc genhtml_legend=1 00:12:27.142 --rc geninfo_all_blocks=1 00:12:27.142 --rc geninfo_unexecuted_blocks=1 00:12:27.142 00:12:27.142 ' 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:27.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:27.142 --rc genhtml_branch_coverage=1 00:12:27.142 --rc genhtml_function_coverage=1 00:12:27.142 --rc genhtml_legend=1 00:12:27.142 --rc geninfo_all_blocks=1 00:12:27.142 --rc geninfo_unexecuted_blocks=1 00:12:27.142 00:12:27.142 ' 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:27.142 08:09:41 
nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:27.142 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@50 -- # : 0 
00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:27.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:12:27.401 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:12:27.402 
08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # remove_target_ns 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # xtrace_disable 00:12:27.402 08:09:41 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.970 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:33.970 08:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # pci_devs=() 00:12:33.970 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:33.970 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:33.970 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:33.970 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:33.970 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:33.970 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # net_devs=() 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # e810=() 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # local -ga e810 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # x722=() 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # local -ga x722 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # mlx=() 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # local -ga mlx 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:33.971 08:09:46 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:33.971 Found 0000:86:00.0 (0x8086 - 0x159b) 
00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:33.971 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:33.971 Found net devices under 0000:86:00.0: cvl_0_0 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:33.971 Found net devices under 0000:86:00.1: cvl_0_1 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- 
# net_devs+=("${pci_net_devs[@]}") 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # is_hw=yes 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@247 -- # create_target_ns 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo 
up' 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@28 -- # local -g _dev 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # ips=() 00:12:33.971 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:33.972 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:33.972 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:33.972 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:12:33.972 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:33.972 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:12:33.972 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:12:33.972 
08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:12:33.972 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:12:33.972 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:12:33.972 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:33.972 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:12:33.972 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:33.972 08:09:46 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772161 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@200 -- # echo 10.0.0.1 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:33.972 10.0.0.1 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772162 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:33.972 10.0.0.2 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:12:33.972 
08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # 
dev_map["initiator$id"]=cvl_0_0 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat 
/sys/class/net/cvl_0_0/ifalias' 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:33.972 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:33.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:33.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.503 ms 00:12:33.972 00:12:33.972 --- 10.0.0.1 ping statistics --- 00:12:33.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.973 rtt min/avg/max/mdev = 0.503/0.503/0.503/0.000 ms 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target0 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:12:33.973 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:33.973 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:12:33.973 00:12:33.973 --- 10.0.0.2 ping statistics --- 00:12:33.973 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:33.973 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # return 0 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@324 -- # 
get_initiator_ip_address 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # 
local dev=initiator1 in_ns= ip 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # return 1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev= 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@160 -- # return 0 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@98 -- # local dev=target0 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:33.973 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:33.974 08:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # local dev=target1 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # return 1 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@159 -- # dev= 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@160 -- # return 0 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:12:33.974 ' 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:12:33.974 08:09:47 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=1596421 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 1596421 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 1596421 ']' 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:33.974 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:33.974 08:09:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:34.231 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.231 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:12:34.231 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:12:34.231 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:34.231 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:12:34.489 08:09:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:46.674 Initializing NVMe Controllers 00:12:46.674 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:46.674 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:12:46.674 Initialization complete. Launching workers. 00:12:46.674 ======================================================== 00:12:46.674 Latency(us) 00:12:46.674 Device Information : IOPS MiB/s Average min max 00:12:46.674 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18231.54 71.22 3511.07 686.10 16276.31 00:12:46.674 ======================================================== 00:12:46.674 Total : 18231.54 71.22 3511.07 686.10 16276.31 00:12:46.674 00:12:46.674 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:12:46.674 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:12:46.674 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:46.674 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@99 -- # sync 00:12:46.674 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # set +e 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:46.675 rmmod nvme_tcp 00:12:46.675 rmmod nvme_fabrics 00:12:46.675 rmmod nvme_keyring 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # set -e 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- 
# return 0 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # '[' -n 1596421 ']' 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@337 -- # killprocess 1596421 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 1596421 ']' 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 1596421 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1596421 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1596421' 00:12:46.675 killing process with pid 1596421 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 1596421 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 1596421 00:12:46.675 nvmf threads initialize successfully 00:12:46.675 bdev subsystem init successfully 00:12:46.675 created a nvmf target service 00:12:46.675 create targets's poll groups done 00:12:46.675 all subsystems of target started 00:12:46.675 nvmf target is running 00:12:46.675 all subsystems of target stopped 00:12:46.675 destroy targets's poll groups done 00:12:46.675 destroyed the nvmf target service 00:12:46.675 bdev subsystem finish successfully 00:12:46.675 nvmf threads destroy successfully 00:12:46.675 08:09:58 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # nvmf_fini 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@254 -- # local dev 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@257 -- # remove_target_ns 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:46.675 08:09:58 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:47.243 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@258 -- # delete_main_bridge 00:12:47.243 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:47.243 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@121 -- # return 0 00:12:47.243 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # ip addr flush dev 
cvl_0_0 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # _dev=0 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # dev_map=() 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@274 -- # iptr 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # iptables-save 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@548 -- # iptables-restore 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.244 00:12:47.244 real 
0m20.152s 00:12:47.244 user 0m46.568s 00:12:47.244 sys 0m6.338s 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:12:47.244 ************************************ 00:12:47.244 END TEST nvmf_example 00:12:47.244 ************************************ 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.244 ************************************ 00:12:47.244 START TEST nvmf_filesystem 00:12:47.244 ************************************ 00:12:47.244 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:12:47.506 * Looking for test storage... 
00:12:47.506 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:47.506 
08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:47.506 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:47.507 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:12:47.507 --rc genhtml_branch_coverage=1 00:12:47.507 --rc genhtml_function_coverage=1 00:12:47.507 --rc genhtml_legend=1 00:12:47.507 --rc geninfo_all_blocks=1 00:12:47.507 --rc geninfo_unexecuted_blocks=1 00:12:47.507 00:12:47.507 ' 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.507 --rc genhtml_branch_coverage=1 00:12:47.507 --rc genhtml_function_coverage=1 00:12:47.507 --rc genhtml_legend=1 00:12:47.507 --rc geninfo_all_blocks=1 00:12:47.507 --rc geninfo_unexecuted_blocks=1 00:12:47.507 00:12:47.507 ' 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.507 --rc genhtml_branch_coverage=1 00:12:47.507 --rc genhtml_function_coverage=1 00:12:47.507 --rc genhtml_legend=1 00:12:47.507 --rc geninfo_all_blocks=1 00:12:47.507 --rc geninfo_unexecuted_blocks=1 00:12:47.507 00:12:47.507 ' 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:47.507 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.507 --rc genhtml_branch_coverage=1 00:12:47.507 --rc genhtml_function_coverage=1 00:12:47.507 --rc genhtml_legend=1 00:12:47.507 --rc geninfo_all_blocks=1 00:12:47.507 --rc geninfo_unexecuted_blocks=1 00:12:47.507 00:12:47.507 ' 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:12:47.507 08:10:01 
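The `cmp_versions`/`lt` trace above (splitting each version string on `IFS=.-:` into an array, then comparing field by field) can be sketched as a standalone function. This is a simplified illustration of the idiom shown in the xtrace, not the actual SPDK `scripts/common.sh`; the digit-validation step (`[[ $d =~ ^[0-9]+$ ]]`) visible in the trace is omitted here for brevity.

```shell
#!/usr/bin/env bash
# Sketch of the version comparison the scripts/common.sh trace performs:
# split each version on '.', '-' or ':' and compare numerically per field.
lt() { # usage: lt A B -> succeeds when version A < version B
    local -a ver1 ver2
    local v len
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing fields default to 0, so "1.15" vs "2" compares cleanly.
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1 # equal -> not less-than
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.1 2.0 || echo "2.1 >= 2.0"
```

Because the comparison is numeric per field, `1.9 < 1.10` holds, which a plain lexical string comparison would get wrong — this is why the lcov version gate in the log goes through this machinery instead of `[[ $a < $b ]]`.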
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:12:47.507 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:12:47.507 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:12:47.507 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:12:47.507 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:47.508 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:12:47.508 
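The `applications.sh` trace above resolves its own location with `dirname` + `readlink -f`, walks up to the repository root, and stores each target binary in an array so callers can append flags at launch time. The following is a hedged sketch of that idiom with illustrative paths, not the actual SPDK layout:

```shell
#!/usr/bin/env bash
# Self-locating idiom from the applications.sh trace: resolve this script's
# directory, then derive the tree root and app directories from it.
script_dir=$(readlink -f "$(dirname "${BASH_SOURCE[0]:-$0}")")
_root=$(readlink -f "$script_dir/../..")   # e.g. test/common -> repo root
_app_dir=$_root/build/bin
_test_app_dir=$_root/test/app

# Binaries are held in arrays so invocations can add arguments:
#   "${NVMF_APP[@]}" -m 0x2 ...
NVMF_APP=("$_app_dir/nvmf_tgt")
echo "root: $_root"
echo "nvmf target binary: ${NVMF_APP[0]}"
```

Storing the command as an array rather than a string keeps word-splitting predictable when the path or extra flags contain spaces, which is why the trace shows `NVMF_APP=("$_app_dir/nvmf_tgt")` instead of a plain variable.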
08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:12:47.508 #define SPDK_CONFIG_H 00:12:47.508 #define SPDK_CONFIG_AIO_FSDEV 1 00:12:47.508 #define SPDK_CONFIG_APPS 1 00:12:47.508 #define SPDK_CONFIG_ARCH native 00:12:47.508 #undef SPDK_CONFIG_ASAN 00:12:47.508 #undef SPDK_CONFIG_AVAHI 00:12:47.508 #undef SPDK_CONFIG_CET 00:12:47.508 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:12:47.508 #define SPDK_CONFIG_COVERAGE 1 00:12:47.508 #define SPDK_CONFIG_CROSS_PREFIX 00:12:47.508 #undef SPDK_CONFIG_CRYPTO 00:12:47.508 #undef SPDK_CONFIG_CRYPTO_MLX5 00:12:47.508 #undef SPDK_CONFIG_CUSTOMOCF 00:12:47.508 #undef SPDK_CONFIG_DAOS 00:12:47.508 #define SPDK_CONFIG_DAOS_DIR 00:12:47.508 #define SPDK_CONFIG_DEBUG 1 00:12:47.508 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:12:47.508 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:12:47.508 #define SPDK_CONFIG_DPDK_INC_DIR 00:12:47.508 #define SPDK_CONFIG_DPDK_LIB_DIR 00:12:47.508 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:12:47.508 #undef SPDK_CONFIG_DPDK_UADK 00:12:47.508 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:12:47.508 #define SPDK_CONFIG_EXAMPLES 1 00:12:47.508 #undef SPDK_CONFIG_FC 00:12:47.508 #define SPDK_CONFIG_FC_PATH 00:12:47.508 #define SPDK_CONFIG_FIO_PLUGIN 1 00:12:47.508 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:12:47.508 #define SPDK_CONFIG_FSDEV 1 00:12:47.508 #undef SPDK_CONFIG_FUSE 00:12:47.508 #undef SPDK_CONFIG_FUZZER 00:12:47.508 #define SPDK_CONFIG_FUZZER_LIB 00:12:47.508 #undef SPDK_CONFIG_GOLANG 00:12:47.508 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:12:47.508 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:12:47.508 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:12:47.508 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:12:47.508 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:12:47.508 #undef SPDK_CONFIG_HAVE_LIBBSD 00:12:47.508 #undef SPDK_CONFIG_HAVE_LZ4 00:12:47.508 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:12:47.508 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:12:47.508 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:12:47.508 #define SPDK_CONFIG_IDXD 1 00:12:47.508 #define SPDK_CONFIG_IDXD_KERNEL 1 00:12:47.508 #undef SPDK_CONFIG_IPSEC_MB 00:12:47.508 #define SPDK_CONFIG_IPSEC_MB_DIR 00:12:47.508 #define SPDK_CONFIG_ISAL 1 00:12:47.508 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:12:47.508 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:12:47.508 #define SPDK_CONFIG_LIBDIR 00:12:47.508 #undef SPDK_CONFIG_LTO 00:12:47.508 #define SPDK_CONFIG_MAX_LCORES 128 00:12:47.508 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:12:47.508 #define SPDK_CONFIG_NVME_CUSE 1 00:12:47.508 #undef SPDK_CONFIG_OCF 00:12:47.508 #define SPDK_CONFIG_OCF_PATH 00:12:47.508 #define SPDK_CONFIG_OPENSSL_PATH 00:12:47.508 #undef SPDK_CONFIG_PGO_CAPTURE 00:12:47.508 #define SPDK_CONFIG_PGO_DIR 00:12:47.508 #undef SPDK_CONFIG_PGO_USE 00:12:47.508 #define SPDK_CONFIG_PREFIX /usr/local 00:12:47.508 #undef SPDK_CONFIG_RAID5F 00:12:47.508 #undef SPDK_CONFIG_RBD 00:12:47.508 #define SPDK_CONFIG_RDMA 1 00:12:47.508 #define SPDK_CONFIG_RDMA_PROV verbs 00:12:47.508 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:12:47.508 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:12:47.508 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:12:47.508 #define SPDK_CONFIG_SHARED 1 00:12:47.508 #undef SPDK_CONFIG_SMA 00:12:47.508 #define SPDK_CONFIG_TESTS 1 00:12:47.508 #undef SPDK_CONFIG_TSAN 00:12:47.508 #define SPDK_CONFIG_UBLK 1 00:12:47.508 #define SPDK_CONFIG_UBSAN 1 00:12:47.508 #undef SPDK_CONFIG_UNIT_TESTS 00:12:47.508 #undef SPDK_CONFIG_URING 00:12:47.508 #define SPDK_CONFIG_URING_PATH 00:12:47.508 #undef SPDK_CONFIG_URING_ZNS 00:12:47.508 #undef SPDK_CONFIG_USDT 00:12:47.508 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:12:47.508 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:12:47.508 #define SPDK_CONFIG_VFIO_USER 1 00:12:47.508 #define SPDK_CONFIG_VFIO_USER_DIR 00:12:47.508 #define SPDK_CONFIG_VHOST 1 00:12:47.508 #define SPDK_CONFIG_VIRTIO 1 00:12:47.508 #undef SPDK_CONFIG_VTUNE 00:12:47.508 #define SPDK_CONFIG_VTUNE_DIR 00:12:47.508 #define SPDK_CONFIG_WERROR 1 00:12:47.508 #define SPDK_CONFIG_WPDK_DIR 00:12:47.508 #undef SPDK_CONFIG_XNVME 00:12:47.508 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
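The `applications.sh@23` step above slurps the generated `spdk/config.h` and glob-matches it for `#define SPDK_CONFIG_DEBUG`; the backslash-escaped pattern in the xtrace is just how bash prints a quoted glob. A minimal sketch of that probe, using a stand-in header rather than a real SPDK build:

```shell
#!/usr/bin/env bash
# Sketch of the build-config probe from the applications.sh trace: read the
# whole generated header and glob-match for a #define. The header written
# here is illustrative, not output of an actual SPDK configure run.
config=$(mktemp)
cat > "$config" <<'EOF'
#ifndef SPDK_CONFIG_H
#define SPDK_CONFIG_H
#define SPDK_CONFIG_DEBUG 1
#endif /* SPDK_CONFIG_H */
EOF

debug=0
# $(<file) reads the file without spawning cat; [[ == *pat* ]] is glob
# matching, which is why the trace shows the pattern character-escaped.
if [[ $(<"$config") == *"#define SPDK_CONFIG_DEBUG"* ]]; then
    debug=1
fi
rm -f "$config"
echo "debug=$debug"
```

A substring match like this is deliberately cheap: it avoids parsing the header and only asks whether the build was configured with the flag, which is all the test harness needs before enabling debug-only behavior.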
00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:47.508 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:12:47.509 08:10:01 
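The `pm/common` steps above build the monitor list with an associative array mapping each collector to whether it needs sudo, then append extra collectors only on bare-metal Linux (`uname -s` is `Linux`, not QEMU, no `/.dockerenv`). A sketch of that selection logic, with illustrative values:

```shell
#!/usr/bin/env bash
# Sketch of the monitor-selection logic from the pm/common trace: an
# associative array records which collectors require sudo, and platform
# checks decide which collectors run at all.
declare -A MONITOR_RESOURCES_SUDO=(
    [collect-bmc-pm]=1
    [collect-cpu-load]=0
    [collect-cpu-temp]=0
    [collect-vmstat]=0
)
SUDO=("" "sudo -E")   # indexed by the sudo flag: 0 -> none, 1 -> sudo -E

MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
# Hardware collectors only make sense outside VMs and containers.
if [[ $(uname -s) == Linux && ! -e /.dockerenv ]]; then
    MONITOR_RESOURCES+=(collect-cpu-temp collect-bmc-pm)
fi

for mon in "${MONITOR_RESOURCES[@]}"; do
    echo "${SUDO[${MONITOR_RESOURCES_SUDO[$mon]}]} $mon"
done
```

Keeping the sudo requirement in data rather than branching per collector means launching any monitor is one uniform expansion, `${SUDO[flag]} collector`, which matches the `SUDO[0]=`/`SUDO[1]='sudo -E'` pair visible in the trace.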
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:12:47.509 
08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:12:47.509 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:12:47.509 
08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:12:47.509 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:12:47.510 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:12:47.510 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 1598827 ]] 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 1598827 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.4W7iHt 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.4W7iHt/tests/target /tmp/spdk.4W7iHt 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189146173440 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963973632 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6817800192 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97970618368 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=11366400 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981390848 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981988864 00:12:47.511 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=598016 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:12:47.511 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:12:47.512 * Looking for test storage... 
00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189146173440 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9032392704 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.512 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.512 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:12:47.512 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:12:47.512 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:47.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.772 --rc genhtml_branch_coverage=1 00:12:47.772 --rc genhtml_function_coverage=1 00:12:47.772 --rc genhtml_legend=1 00:12:47.772 --rc geninfo_all_blocks=1 00:12:47.772 --rc geninfo_unexecuted_blocks=1 00:12:47.772 00:12:47.772 ' 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:47.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.772 --rc genhtml_branch_coverage=1 00:12:47.772 --rc genhtml_function_coverage=1 00:12:47.772 --rc genhtml_legend=1 00:12:47.772 --rc geninfo_all_blocks=1 00:12:47.772 --rc geninfo_unexecuted_blocks=1 00:12:47.772 00:12:47.772 ' 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:47.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.772 --rc genhtml_branch_coverage=1 00:12:47.772 --rc genhtml_function_coverage=1 00:12:47.772 --rc genhtml_legend=1 00:12:47.772 --rc geninfo_all_blocks=1 00:12:47.772 --rc geninfo_unexecuted_blocks=1 00:12:47.772 00:12:47.772 ' 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:47.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:47.772 --rc genhtml_branch_coverage=1 00:12:47.772 --rc genhtml_function_coverage=1 00:12:47.772 --rc genhtml_legend=1 00:12:47.772 --rc geninfo_all_blocks=1 00:12:47.772 --rc geninfo_unexecuted_blocks=1 00:12:47.772 00:12:47.772 ' 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.772 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.772 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.772 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.773 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@7 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@50 -- # : 0 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:47.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.773 08:10:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # remove_target_ns 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # xtrace_disable 00:12:47.773 08:10:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # pci_devs=() 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:54.340 08:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # net_devs=() 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # e810=() 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # local -ga e810 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # x722=() 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # local -ga x722 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # mlx=() 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # local -ga mlx 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:54.340 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == 
rdma ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:54.340 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:54.340 Found net devices under 0000:86:00.0: cvl_0_0 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:54.340 Found net devices under 0000:86:00.1: cvl_0_1 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # is_hw=yes 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 
00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@247 -- # create_target_ns 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@28 -- # local -g _dev 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=() 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:12:54.340 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:12:54.341 08:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772161 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:54.341 10.0.0.1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # 
set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772162 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:54.341 10.0.0.2 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:54.341 08:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:12:54.341 08:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/cvl_0_0/ifalias 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:54.341 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:54.341 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.321 ms 00:12:54.341 00:12:54.341 --- 10.0.0.1 ping statistics --- 00:12:54.341 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.341 rtt min/avg/max/mdev = 0.321/0.321/0.321/0.000 ms 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target0 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:54.341 08:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:12:54.341 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:12:54.341 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:54.341 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:12:54.341 00:12:54.341 --- 10.0.0.2 ping statistics --- 00:12:54.342 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:54.342 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # return 0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:12:54.342 
08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@174 -- # get_ip_address initiator1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # return 1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev= 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@160 -- # return 0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:54.342 08:10:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # local dev=target1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # return 1 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@159 -- # dev= 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@160 -- # return 0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:12:54.342 ' 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 
00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:54.342 ************************************ 00:12:54.342 START TEST nvmf_filesystem_no_in_capsule 00:12:54.342 ************************************ 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=1601983 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 1601983 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1601983 ']' 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:54.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:54.342 08:10:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.342 [2024-11-20 08:10:07.889822] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:12:54.342 [2024-11-20 08:10:07.889868] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:54.342 [2024-11-20 08:10:07.969072] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:54.342 [2024-11-20 08:10:08.011680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:54.343 [2024-11-20 08:10:08.011717] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:54.343 [2024-11-20 08:10:08.011725] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:54.343 [2024-11-20 08:10:08.011731] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:54.343 [2024-11-20 08:10:08.011736] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:54.343 [2024-11-20 08:10:08.013293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:54.343 [2024-11-20 08:10:08.013400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:54.343 [2024-11-20 08:10:08.013506] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:54.343 [2024-11-20 08:10:08.013507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.343 [2024-11-20 08:10:08.146199] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.343 Malloc1 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.343 [2024-11-20 08:10:08.293195] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:12:54.343 08:10:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:12:54.343 { 00:12:54.343 "name": "Malloc1", 00:12:54.343 "aliases": [ 00:12:54.343 "e36cf803-2860-472f-b9be-a219fab191ec" 00:12:54.343 ], 00:12:54.343 "product_name": "Malloc disk", 00:12:54.343 "block_size": 512, 00:12:54.343 "num_blocks": 1048576, 00:12:54.343 "uuid": "e36cf803-2860-472f-b9be-a219fab191ec", 00:12:54.343 "assigned_rate_limits": { 00:12:54.343 "rw_ios_per_sec": 0, 00:12:54.343 "rw_mbytes_per_sec": 0, 00:12:54.343 "r_mbytes_per_sec": 0, 00:12:54.343 "w_mbytes_per_sec": 0 00:12:54.343 }, 00:12:54.343 "claimed": true, 00:12:54.343 "claim_type": "exclusive_write", 00:12:54.343 "zoned": false, 00:12:54.343 "supported_io_types": { 00:12:54.343 "read": true, 00:12:54.343 "write": true, 00:12:54.343 "unmap": true, 00:12:54.343 "flush": true, 00:12:54.343 "reset": true, 00:12:54.343 "nvme_admin": false, 00:12:54.343 "nvme_io": false, 00:12:54.343 "nvme_io_md": false, 00:12:54.343 "write_zeroes": true, 00:12:54.343 "zcopy": true, 00:12:54.343 "get_zone_info": false, 00:12:54.343 "zone_management": false, 00:12:54.343 "zone_append": false, 00:12:54.343 "compare": false, 00:12:54.343 "compare_and_write": 
false, 00:12:54.343 "abort": true, 00:12:54.343 "seek_hole": false, 00:12:54.343 "seek_data": false, 00:12:54.343 "copy": true, 00:12:54.343 "nvme_iov_md": false 00:12:54.343 }, 00:12:54.343 "memory_domains": [ 00:12:54.343 { 00:12:54.343 "dma_device_id": "system", 00:12:54.343 "dma_device_type": 1 00:12:54.343 }, 00:12:54.343 { 00:12:54.343 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.343 "dma_device_type": 2 00:12:54.343 } 00:12:54.343 ], 00:12:54.343 "driver_specific": {} 00:12:54.343 } 00:12:54.343 ]' 00:12:54.343 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:12:54.601 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:12:54.601 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:12:54.601 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:12:54.601 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:12:54.601 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:12:54.601 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:12:54.601 08:10:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:55.536 08:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:12:55.536 08:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:12:55.536 08:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:55.536 08:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:55.536 08:10:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:12:58.086 08:10:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:12:58.086 08:10:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:12:59.018 08:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:12:59.018 08:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:12:59.018 08:10:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:59.018 08:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:59.018 08:10:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:59.018 ************************************ 00:12:59.018 START TEST filesystem_ext4 00:12:59.018 ************************************ 00:12:59.018 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:12:59.018 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:12:59.018 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:59.018 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:12:59.018 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:12:59.018 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:59.018 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:12:59.018 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:12:59.018 08:10:13 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:12:59.018 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:12:59.018 08:10:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:12:59.018 mke2fs 1.47.0 (5-Feb-2023) 00:12:59.276 Discarding device blocks: 0/522240 done 00:12:59.276 Creating filesystem with 522240 1k blocks and 130560 inodes 00:12:59.276 Filesystem UUID: 1dcea39c-5f3f-41c0-98a2-386c3726f9ba 00:12:59.276 Superblock backups stored on blocks: 00:12:59.276 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:12:59.276 00:12:59.276 Allocating group tables: 0/64 done 00:12:59.276 Writing inode tables: 0/64 done 00:13:01.174 Creating journal (8192 blocks): done 00:13:01.996 Writing superblocks and filesystem accounting information: 0/64 done 00:13:01.996 00:13:01.996 08:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:01.996 08:10:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:08.551 08:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 1601983 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:08.551 00:13:08.551 real 0m8.410s 00:13:08.551 user 0m0.030s 00:13:08.551 sys 0m0.072s 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:08.551 ************************************ 00:13:08.551 END TEST filesystem_ext4 00:13:08.551 ************************************ 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:08.551 
08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.551 ************************************ 00:13:08.551 START TEST filesystem_btrfs 00:13:08.551 ************************************ 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:08.551 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:08.552 08:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:08.552 btrfs-progs v6.8.1 00:13:08.552 See https://btrfs.readthedocs.io for more information. 00:13:08.552 00:13:08.552 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:08.552 NOTE: several default settings have changed in version 5.15, please make sure 00:13:08.552 this does not affect your deployments: 00:13:08.552 - DUP for metadata (-m dup) 00:13:08.552 - enabled no-holes (-O no-holes) 00:13:08.552 - enabled free-space-tree (-R free-space-tree) 00:13:08.552 00:13:08.552 Label: (null) 00:13:08.552 UUID: 7099412e-aee5-4d02-80ae-eae6e3a6f02c 00:13:08.552 Node size: 16384 00:13:08.552 Sector size: 4096 (CPU page size: 4096) 00:13:08.552 Filesystem size: 510.00MiB 00:13:08.552 Block group profiles: 00:13:08.552 Data: single 8.00MiB 00:13:08.552 Metadata: DUP 32.00MiB 00:13:08.552 System: DUP 8.00MiB 00:13:08.552 SSD detected: yes 00:13:08.552 Zoned device: no 00:13:08.552 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:08.552 Checksum: crc32c 00:13:08.552 Number of devices: 1 00:13:08.552 Devices: 00:13:08.552 ID SIZE PATH 00:13:08.552 1 510.00MiB /dev/nvme0n1p1 00:13:08.552 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:08.552 08:10:21 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 1601983 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:08.552 00:13:08.552 real 0m0.497s 00:13:08.552 user 0m0.024s 00:13:08.552 sys 0m0.117s 00:13:08.552 08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:08.552 
08:10:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:08.552 ************************************ 00:13:08.552 END TEST filesystem_btrfs 00:13:08.552 ************************************ 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:08.552 ************************************ 00:13:08.552 START TEST filesystem_xfs 00:13:08.552 ************************************ 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:08.552 08:10:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:08.552 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:08.552 = sectsz=512 attr=2, projid32bit=1 00:13:08.552 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:08.552 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:08.552 data = bsize=4096 blocks=130560, imaxpct=25 00:13:08.552 = sunit=0 swidth=0 blks 00:13:08.552 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:08.552 log =internal log bsize=4096 blocks=16384, version=2 00:13:08.552 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:08.552 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:09.134 Discarding blocks...Done. 
00:13:09.134 08:10:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:09.134 08:10:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:11.127 08:10:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:11.127 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:13:11.127 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:11.127 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:13:11.127 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:13:11.127 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:11.127 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 1601983 00:13:11.127 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:11.127 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:11.127 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:11.127 08:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:11.127 00:13:11.127 real 0m3.012s 00:13:11.127 user 0m0.023s 00:13:11.127 sys 0m0.077s 00:13:11.127 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.127 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:11.127 ************************************ 00:13:11.127 END TEST filesystem_xfs 00:13:11.127 ************************************ 00:13:11.127 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:11.385 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:11.386 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:11.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 1601983 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1601983 ']' 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1601983 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1601983 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1601983' 00:13:11.645 killing process with pid 1601983 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 1601983 00:13:11.645 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 1601983 00:13:11.905 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:11.905 00:13:11.905 real 0m18.018s 00:13:11.905 user 1m10.835s 00:13:11.905 sys 0m1.474s 00:13:11.905 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:11.905 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:11.905 ************************************ 00:13:11.905 END TEST nvmf_filesystem_no_in_capsule 00:13:11.905 ************************************ 00:13:11.905 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:13:11.905 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:11.905 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:11.905 08:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:11.905 ************************************ 00:13:11.905 START TEST nvmf_filesystem_in_capsule 00:13:11.905 ************************************ 00:13:11.905 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:13:11.905 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:13:11.905 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:13:11.905 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:11.905 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:11.905 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.163 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=1605126 00:13:12.163 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 1605126 00:13:12.163 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:12.163 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 1605126 ']' 00:13:12.163 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:12.163 08:10:25 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:12.163 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:12.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:12.163 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:12.163 08:10:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.163 [2024-11-20 08:10:25.982632] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:13:12.163 [2024-11-20 08:10:25.982678] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:12.163 [2024-11-20 08:10:26.065035] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:12.163 [2024-11-20 08:10:26.107351] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:12.163 [2024-11-20 08:10:26.107389] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:12.163 [2024-11-20 08:10:26.107396] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:12.163 [2024-11-20 08:10:26.107406] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:12.163 [2024-11-20 08:10:26.107411] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:12.163 [2024-11-20 08:10:26.108902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:12.163 [2024-11-20 08:10:26.109008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:12.163 [2024-11-20 08:10:26.109123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:12.163 [2024-11-20 08:10:26.109124] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:12.420 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:12.420 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:13:12.420 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:12.420 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:12.420 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.421 [2024-11-20 08:10:26.253055] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.421 Malloc1 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.421 08:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.421 [2024-11-20 08:10:26.402265] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:12.421 08:10:26 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:13:12.421 { 00:13:12.421 "name": "Malloc1", 00:13:12.421 "aliases": [ 00:13:12.421 "4c2c6728-9c40-4437-a169-ba67156bcebf" 00:13:12.421 ], 00:13:12.421 "product_name": "Malloc disk", 00:13:12.421 "block_size": 512, 00:13:12.421 "num_blocks": 1048576, 00:13:12.421 "uuid": "4c2c6728-9c40-4437-a169-ba67156bcebf", 00:13:12.421 "assigned_rate_limits": { 00:13:12.421 "rw_ios_per_sec": 0, 00:13:12.421 "rw_mbytes_per_sec": 0, 00:13:12.421 "r_mbytes_per_sec": 0, 00:13:12.421 "w_mbytes_per_sec": 0 00:13:12.421 }, 00:13:12.421 "claimed": true, 00:13:12.421 "claim_type": "exclusive_write", 00:13:12.421 "zoned": false, 00:13:12.421 "supported_io_types": { 00:13:12.421 "read": true, 00:13:12.421 "write": true, 00:13:12.421 "unmap": true, 00:13:12.421 "flush": true, 00:13:12.421 "reset": true, 00:13:12.421 "nvme_admin": false, 00:13:12.421 "nvme_io": false, 00:13:12.421 "nvme_io_md": false, 00:13:12.421 "write_zeroes": true, 00:13:12.421 "zcopy": true, 00:13:12.421 "get_zone_info": false, 00:13:12.421 "zone_management": false, 00:13:12.421 "zone_append": false, 00:13:12.421 "compare": false, 00:13:12.421 "compare_and_write": false, 00:13:12.421 "abort": true, 00:13:12.421 "seek_hole": false, 00:13:12.421 "seek_data": false, 00:13:12.421 "copy": true, 00:13:12.421 "nvme_iov_md": false 00:13:12.421 }, 00:13:12.421 "memory_domains": [ 00:13:12.421 { 00:13:12.421 "dma_device_id": "system", 00:13:12.421 "dma_device_type": 1 00:13:12.421 }, 00:13:12.421 { 00:13:12.421 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.421 "dma_device_type": 2 00:13:12.421 } 00:13:12.421 ], 00:13:12.421 
"driver_specific": {} 00:13:12.421 } 00:13:12.421 ]' 00:13:12.421 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:13:12.678 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:13:12.678 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:13:12.678 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:13:12.678 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:13:12.678 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:13:12.678 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:13:12.678 08:10:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:14.047 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:13:14.047 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:13:14.047 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:14.047 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:13:14.047 08:10:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:13:15.940 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:15.940 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:15.940 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.940 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:13:15.940 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.941 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:13:15.941 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:13:15.941 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:13:15.941 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:13:15.941 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:13:15.941 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:13:15.941 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:13:15.941 08:10:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:13:15.941 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:13:15.941 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:13:15.941 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:13:15.941 08:10:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:13:16.198 08:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:13:16.760 08:10:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:18.128 ************************************ 00:13:18.128 START TEST filesystem_in_capsule_ext4 00:13:18.128 ************************************ 00:13:18.128 08:10:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:13:18.128 08:10:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:13:18.128 mke2fs 1.47.0 (5-Feb-2023) 00:13:18.128 Discarding device blocks: 
0/522240 done 00:13:18.128 Creating filesystem with 522240 1k blocks and 130560 inodes 00:13:18.128 Filesystem UUID: c4464776-14a3-443e-9e66-02b005ba3bb4 00:13:18.128 Superblock backups stored on blocks: 00:13:18.128 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:13:18.128 00:13:18.128 Allocating group tables: 0/64 done 00:13:18.128 Writing inode tables: 0/64 done 00:13:20.906 Creating journal (8192 blocks): done 00:13:23.205 Writing superblocks and filesystem accounting information: 0/64 done 00:13:23.205 00:13:23.205 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:13:23.205 08:10:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 1605126 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:29.752 00:13:29.752 real 0m11.196s 00:13:29.752 user 0m0.021s 00:13:29.752 sys 0m0.087s 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:13:29.752 ************************************ 00:13:29.752 END TEST filesystem_in_capsule_ext4 00:13:29.752 ************************************ 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:29.752 08:10:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:29.752 ************************************ 00:13:29.752 START 
TEST filesystem_in_capsule_btrfs 00:13:29.752 ************************************ 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:13:29.752 btrfs-progs v6.8.1 00:13:29.752 See https://btrfs.readthedocs.io for more information. 00:13:29.752 00:13:29.752 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:13:29.752 NOTE: several default settings have changed in version 5.15, please make sure 00:13:29.752 this does not affect your deployments: 00:13:29.752 - DUP for metadata (-m dup) 00:13:29.752 - enabled no-holes (-O no-holes) 00:13:29.752 - enabled free-space-tree (-R free-space-tree) 00:13:29.752 00:13:29.752 Label: (null) 00:13:29.752 UUID: d825f463-bf4a-4e7a-afa9-e3e4ac53eb29 00:13:29.752 Node size: 16384 00:13:29.752 Sector size: 4096 (CPU page size: 4096) 00:13:29.752 Filesystem size: 510.00MiB 00:13:29.752 Block group profiles: 00:13:29.752 Data: single 8.00MiB 00:13:29.752 Metadata: DUP 32.00MiB 00:13:29.752 System: DUP 8.00MiB 00:13:29.752 SSD detected: yes 00:13:29.752 Zoned device: no 00:13:29.752 Features: extref, skinny-metadata, no-holes, free-space-tree 00:13:29.752 Checksum: crc32c 00:13:29.752 Number of devices: 1 00:13:29.752 Devices: 00:13:29.752 ID SIZE PATH 00:13:29.752 1 510.00MiB /dev/nvme0n1p1 00:13:29.752 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:13:29.752 08:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:30.316 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:30.316 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:13:30.316 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:30.316 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:13:30.316 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:13:30.316 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:30.316 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 1605126 00:13:30.316 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:30.316 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:30.573 00:13:30.573 real 0m1.325s 00:13:30.573 user 0m0.040s 00:13:30.573 sys 0m0.101s 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:13:30.573 ************************************ 00:13:30.573 END TEST filesystem_in_capsule_btrfs 00:13:30.573 ************************************ 00:13:30.573 08:10:44 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:30.573 ************************************ 00:13:30.573 START TEST filesystem_in_capsule_xfs 00:13:30.573 ************************************ 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:13:30.573 
08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:13:30.573 08:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:13:30.573 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:13:30.573 = sectsz=512 attr=2, projid32bit=1 00:13:30.573 = crc=1 finobt=1, sparse=1, rmapbt=0 00:13:30.573 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:13:30.574 data = bsize=4096 blocks=130560, imaxpct=25 00:13:30.574 = sunit=0 swidth=0 blks 00:13:30.574 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:13:30.574 log =internal log bsize=4096 blocks=16384, version=2 00:13:30.574 = sectsz=512 sunit=0 blks, lazy-count=1 00:13:30.574 realtime =none extsz=4096 blocks=0, rtextents=0 00:13:31.504 Discarding blocks...Done. 
00:13:31.504 08:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:13:31.504 08:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:13:34.024 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:13:34.024 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:13:34.024 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:13:34.024 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:13:34.024 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:13:34.025 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:13:34.025 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 1605126 00:13:34.025 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:13:34.025 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:13:34.025 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:13:34.025 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:13:34.025 00:13:34.025 real 0m3.464s 00:13:34.025 user 0m0.022s 00:13:34.025 sys 0m0.075s 00:13:34.025 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.025 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:13:34.025 ************************************ 00:13:34.025 END TEST filesystem_in_capsule_xfs 00:13:34.025 ************************************ 00:13:34.025 08:10:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:13:34.282 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:13:34.282 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:34.282 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.539 08:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 1605126 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 1605126 ']' 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 1605126 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:34.539 08:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1605126 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1605126' 00:13:34.539 killing process with pid 1605126 00:13:34.539 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 1605126 00:13:34.540 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 1605126 00:13:34.798 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:13:34.798 00:13:34.798 real 0m22.807s 00:13:34.798 user 1m29.857s 00:13:34.798 sys 0m1.558s 00:13:34.798 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:34.798 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:13:34.798 ************************************ 00:13:34.798 END TEST nvmf_filesystem_in_capsule 00:13:34.798 ************************************ 00:13:34.798 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:13:34.798 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:34.798 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@99 -- # sync 00:13:34.798 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:34.798 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # set +e 00:13:34.798 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:34.798 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:34.798 rmmod nvme_tcp 00:13:34.798 rmmod nvme_fabrics 00:13:34.798 rmmod nvme_keyring 00:13:35.057 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:35.057 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # set -e 00:13:35.057 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # return 0 00:13:35.057 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:13:35.057 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:35.057 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # nvmf_fini 00:13:35.057 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@254 -- # local dev 00:13:35.057 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:35.057 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:35.057 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:35.057 08:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:36.961 08:10:50 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@121 -- # return 0 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # _dev=0 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # dev_map=() 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@274 -- # iptr 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # iptables-save 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@548 -- # iptables-restore 00:13:36.961 00:13:36.961 real 0m49.731s 00:13:36.961 user 2m42.788s 00:13:36.961 sys 0m7.862s 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:13:36.961 ************************************ 00:13:36.961 END TEST nvmf_filesystem 00:13:36.961 ************************************ 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:36.961 08:10:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:37.221 ************************************ 00:13:37.221 START TEST nvmf_target_discovery 00:13:37.221 ************************************ 00:13:37.221 08:10:50 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 
00:13:37.221 * Looking for test storage... 00:13:37.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
scripts/common.sh@344 -- # case "$op" in 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:37.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.221 --rc genhtml_branch_coverage=1 00:13:37.221 --rc genhtml_function_coverage=1 00:13:37.221 --rc genhtml_legend=1 00:13:37.221 --rc geninfo_all_blocks=1 00:13:37.221 --rc geninfo_unexecuted_blocks=1 00:13:37.221 00:13:37.221 ' 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:37.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.221 --rc genhtml_branch_coverage=1 00:13:37.221 --rc genhtml_function_coverage=1 00:13:37.221 --rc genhtml_legend=1 00:13:37.221 --rc geninfo_all_blocks=1 00:13:37.221 --rc geninfo_unexecuted_blocks=1 00:13:37.221 00:13:37.221 ' 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:37.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.221 --rc genhtml_branch_coverage=1 00:13:37.221 --rc genhtml_function_coverage=1 00:13:37.221 --rc genhtml_legend=1 00:13:37.221 --rc geninfo_all_blocks=1 00:13:37.221 --rc geninfo_unexecuted_blocks=1 00:13:37.221 00:13:37.221 ' 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:37.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:37.221 --rc genhtml_branch_coverage=1 00:13:37.221 --rc genhtml_function_coverage=1 00:13:37.221 --rc genhtml_legend=1 00:13:37.221 --rc geninfo_all_blocks=1 00:13:37.221 --rc geninfo_unexecuted_blocks=1 00:13:37.221 00:13:37.221 ' 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:37.221 
08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:37.221 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.222 08:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@50 -- # : 0 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:37.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:13:37.222 08:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:13:37.222 08:10:51 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:43.791 08:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # e810=() 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # x722=() 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # mlx=() 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:43.791 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:43.791 08:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:43.791 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:43.792 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:43.792 
08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:43.792 Found net devices under 0000:86:00.0: cvl_0_0 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:43.792 08:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:43.792 Found net devices under 0000:86:00.1: cvl_0_1 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@247 -- # create_target_ns 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # 
local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=() 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 
00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:43.792 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local 
val=167772161 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:43.793 10.0.0.1 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:43.793 08:10:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:43.793 10.0.0.2 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:13:43.793 08:10:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # eval 'ip 
netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address 
initiator0 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:43.793 08:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:43.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:43.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.432 ms 00:13:43.793 00:13:43.793 --- 10.0.0.1 ping statistics --- 00:13:43.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.793 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:43.793 
08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:43.793 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:43.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:43.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:13:43.794 00:13:43.794 --- 10.0.0.2 ping statistics --- 00:13:43.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:43.794 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # return 0 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:43.794 08:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n 
initiator1 ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # return 1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev= 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@160 -- # return 0 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:43.794 08:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@98 -- # local dev=target1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # return 1 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@159 -- # dev= 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@160 -- # return 0 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:13:43.794 ' 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # nvmfpid=1612703 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # waitforlisten 1612703 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 1612703 ']' 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:43.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.794 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.794 [2024-11-20 08:10:57.308034] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
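`nvmfappstart` above backgrounds `nvmf_tgt` inside the namespace and then calls `waitforlisten` with `max_retries=100` before any `rpc_cmd` calls touch `/var/tmp/spdk.sock`. A generic sketch of that wait loop — the signature and retry parameter are assumptions for illustration; the real helper lives in `autotest_common.sh`:

```shell
# Poll until the app is alive and its RPC socket exists, or give up.
# rpc_addr/max_retries defaults mirror what the trace shows, but the
# actual helper's internals may differ.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
  while (( max_retries-- > 0 )); do
    kill -0 "$pid" 2>/dev/null || return 1   # target process died
    [[ -S $rpc_addr ]] && return 0           # UNIX RPC socket is up
    sleep 0.1
  done
  return 1
}
```

Only after this returns 0 is it safe to issue the `nvmf_create_transport` / `nvmf_create_subsystem` RPCs that follow in the log.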
00:13:43.795 [2024-11-20 08:10:57.308088] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:43.795 [2024-11-20 08:10:57.389245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:43.795 [2024-11-20 08:10:57.430327] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:43.795 [2024-11-20 08:10:57.430367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:43.795 [2024-11-20 08:10:57.430374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:43.795 [2024-11-20 08:10:57.430380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:43.795 [2024-11-20 08:10:57.430385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
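The `-m 0xF` mask passed to `nvmf_tgt` above is why DPDK reports four cores available and the records that follow show reactors starting on cores 0 through 3. A small sketch of how such a hex core mask expands to CPU indices (the helper name is made up for illustration):

```shell
# Expand a hex CPU mask into the list of core indices it selects,
# e.g. 0xF -> cores 0 1 2 3, matching the reactor_run notices.
mask_to_cores() {
  local mask=$(( $1 )) core=0 out=()
  while (( mask )); do
    (( mask & 1 )) && out+=("$core")
    (( mask >>= 1, core++ ))
  done
  echo "${out[@]}"
}
mask_to_cores 0xF    # 0 1 2 3
```

The reactors are not required to start in index order; the log shows cores 1 and 2 coming up before core 0.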
00:13:43.795 [2024-11-20 08:10:57.431985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:43.795 [2024-11-20 08:10:57.432093] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:43.795 [2024-11-20 08:10:57.432207] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:43.795 [2024-11-20 08:10:57.432216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 [2024-11-20 08:10:57.576820] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:13:43.795 08:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 Null1 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 [2024-11-20 08:10:57.622199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 Null2 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 
08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 Null3 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 Null4 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.795 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:13:43.795 08:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:43.796 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:43.796 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:43.796 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:44.055 00:13:44.055 Discovery Log Number of Records 6, Generation counter 6 00:13:44.055 =====Discovery Log Entry 0====== 00:13:44.055 trtype: tcp 00:13:44.055 adrfam: ipv4 00:13:44.055 subtype: current discovery subsystem 00:13:44.055 treq: not required 00:13:44.055 portid: 0 00:13:44.055 trsvcid: 4420 00:13:44.055 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:44.055 traddr: 10.0.0.2 00:13:44.055 eflags: explicit discovery connections, duplicate discovery information 00:13:44.055 sectype: none 00:13:44.055 =====Discovery Log Entry 1====== 00:13:44.055 trtype: tcp 00:13:44.055 adrfam: ipv4 00:13:44.055 subtype: nvme subsystem 00:13:44.055 treq: not required 00:13:44.055 portid: 0 00:13:44.055 trsvcid: 4420 00:13:44.055 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:44.055 traddr: 10.0.0.2 00:13:44.055 eflags: none 00:13:44.055 sectype: none 00:13:44.055 =====Discovery Log Entry 2====== 00:13:44.055 trtype: tcp 00:13:44.055 adrfam: ipv4 00:13:44.055 subtype: nvme subsystem 00:13:44.055 treq: not required 00:13:44.055 portid: 0 00:13:44.055 trsvcid: 4420 00:13:44.055 subnqn: nqn.2016-06.io.spdk:cnode2 00:13:44.055 traddr: 10.0.0.2 00:13:44.055 eflags: none 00:13:44.055 sectype: none 00:13:44.055 =====Discovery Log Entry 3====== 00:13:44.055 trtype: tcp 00:13:44.055 adrfam: ipv4 00:13:44.055 subtype: nvme subsystem 00:13:44.055 treq: not required 00:13:44.055 portid: 
0 00:13:44.055 trsvcid: 4420 00:13:44.055 subnqn: nqn.2016-06.io.spdk:cnode3 00:13:44.055 traddr: 10.0.0.2 00:13:44.055 eflags: none 00:13:44.055 sectype: none 00:13:44.055 =====Discovery Log Entry 4====== 00:13:44.055 trtype: tcp 00:13:44.055 adrfam: ipv4 00:13:44.055 subtype: nvme subsystem 00:13:44.055 treq: not required 00:13:44.055 portid: 0 00:13:44.055 trsvcid: 4420 00:13:44.055 subnqn: nqn.2016-06.io.spdk:cnode4 00:13:44.055 traddr: 10.0.0.2 00:13:44.055 eflags: none 00:13:44.055 sectype: none 00:13:44.055 =====Discovery Log Entry 5====== 00:13:44.055 trtype: tcp 00:13:44.055 adrfam: ipv4 00:13:44.055 subtype: discovery subsystem referral 00:13:44.055 treq: not required 00:13:44.055 portid: 0 00:13:44.055 trsvcid: 4430 00:13:44.055 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:44.055 traddr: 10.0.0.2 00:13:44.055 eflags: none 00:13:44.055 sectype: none 00:13:44.055 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:13:44.055 Perform nvmf subsystem discovery via RPC 00:13:44.055 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:13:44.055 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.055 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:44.055 [ 00:13:44.055 { 00:13:44.055 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:44.055 "subtype": "Discovery", 00:13:44.055 "listen_addresses": [ 00:13:44.055 { 00:13:44.055 "trtype": "TCP", 00:13:44.055 "adrfam": "IPv4", 00:13:44.055 "traddr": "10.0.0.2", 00:13:44.055 "trsvcid": "4420" 00:13:44.055 } 00:13:44.055 ], 00:13:44.055 "allow_any_host": true, 00:13:44.055 "hosts": [] 00:13:44.055 }, 00:13:44.055 { 00:13:44.055 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:44.055 "subtype": "NVMe", 00:13:44.055 "listen_addresses": [ 
00:13:44.055 { 00:13:44.055 "trtype": "TCP", 00:13:44.055 "adrfam": "IPv4", 00:13:44.055 "traddr": "10.0.0.2", 00:13:44.055 "trsvcid": "4420" 00:13:44.055 } 00:13:44.055 ], 00:13:44.055 "allow_any_host": true, 00:13:44.055 "hosts": [], 00:13:44.055 "serial_number": "SPDK00000000000001", 00:13:44.055 "model_number": "SPDK bdev Controller", 00:13:44.055 "max_namespaces": 32, 00:13:44.055 "min_cntlid": 1, 00:13:44.055 "max_cntlid": 65519, 00:13:44.055 "namespaces": [ 00:13:44.055 { 00:13:44.055 "nsid": 1, 00:13:44.055 "bdev_name": "Null1", 00:13:44.055 "name": "Null1", 00:13:44.055 "nguid": "8C010D3D6D324C369BF86D0265EC1065", 00:13:44.055 "uuid": "8c010d3d-6d32-4c36-9bf8-6d0265ec1065" 00:13:44.055 } 00:13:44.055 ] 00:13:44.055 }, 00:13:44.055 { 00:13:44.055 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:13:44.055 "subtype": "NVMe", 00:13:44.055 "listen_addresses": [ 00:13:44.055 { 00:13:44.055 "trtype": "TCP", 00:13:44.055 "adrfam": "IPv4", 00:13:44.055 "traddr": "10.0.0.2", 00:13:44.055 "trsvcid": "4420" 00:13:44.055 } 00:13:44.055 ], 00:13:44.055 "allow_any_host": true, 00:13:44.055 "hosts": [], 00:13:44.055 "serial_number": "SPDK00000000000002", 00:13:44.055 "model_number": "SPDK bdev Controller", 00:13:44.055 "max_namespaces": 32, 00:13:44.055 "min_cntlid": 1, 00:13:44.055 "max_cntlid": 65519, 00:13:44.055 "namespaces": [ 00:13:44.055 { 00:13:44.055 "nsid": 1, 00:13:44.055 "bdev_name": "Null2", 00:13:44.055 "name": "Null2", 00:13:44.055 "nguid": "9A88016F4DD6480E83D99954E0FB9BA3", 00:13:44.056 "uuid": "9a88016f-4dd6-480e-83d9-9954e0fb9ba3" 00:13:44.056 } 00:13:44.056 ] 00:13:44.056 }, 00:13:44.056 { 00:13:44.056 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:13:44.056 "subtype": "NVMe", 00:13:44.056 "listen_addresses": [ 00:13:44.056 { 00:13:44.056 "trtype": "TCP", 00:13:44.056 "adrfam": "IPv4", 00:13:44.056 "traddr": "10.0.0.2", 00:13:44.056 "trsvcid": "4420" 00:13:44.056 } 00:13:44.056 ], 00:13:44.056 "allow_any_host": true, 00:13:44.056 "hosts": [], 00:13:44.056 
"serial_number": "SPDK00000000000003", 00:13:44.056 "model_number": "SPDK bdev Controller", 00:13:44.056 "max_namespaces": 32, 00:13:44.056 "min_cntlid": 1, 00:13:44.056 "max_cntlid": 65519, 00:13:44.056 "namespaces": [ 00:13:44.056 { 00:13:44.056 "nsid": 1, 00:13:44.056 "bdev_name": "Null3", 00:13:44.056 "name": "Null3", 00:13:44.056 "nguid": "1D97D2B4C7934DC8B4A4F7FD642F8D78", 00:13:44.056 "uuid": "1d97d2b4-c793-4dc8-b4a4-f7fd642f8d78" 00:13:44.056 } 00:13:44.056 ] 00:13:44.056 }, 00:13:44.056 { 00:13:44.056 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:13:44.056 "subtype": "NVMe", 00:13:44.056 "listen_addresses": [ 00:13:44.056 { 00:13:44.056 "trtype": "TCP", 00:13:44.056 "adrfam": "IPv4", 00:13:44.056 "traddr": "10.0.0.2", 00:13:44.056 "trsvcid": "4420" 00:13:44.056 } 00:13:44.056 ], 00:13:44.056 "allow_any_host": true, 00:13:44.056 "hosts": [], 00:13:44.056 "serial_number": "SPDK00000000000004", 00:13:44.056 "model_number": "SPDK bdev Controller", 00:13:44.056 "max_namespaces": 32, 00:13:44.056 "min_cntlid": 1, 00:13:44.056 "max_cntlid": 65519, 00:13:44.056 "namespaces": [ 00:13:44.056 { 00:13:44.056 "nsid": 1, 00:13:44.056 "bdev_name": "Null4", 00:13:44.056 "name": "Null4", 00:13:44.056 "nguid": "DDC08E1A8641455A8C884F6BB00E2B17", 00:13:44.056 "uuid": "ddc08e1a-8641-455a-8c88-4f6bb00e2b17" 00:13:44.056 } 00:13:44.056 ] 00:13:44.056 } 00:13:44.056 ] 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:44.056 08:10:57 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:44.056 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:44.056 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:13:44.056 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:13:44.056 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:13:44.056 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:13:44.056 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:44.056 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@99 -- # sync 00:13:44.056 
08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:44.056 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # set +e 00:13:44.056 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:44.056 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:44.056 rmmod nvme_tcp 00:13:44.056 rmmod nvme_fabrics 00:13:44.056 rmmod nvme_keyring 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # set -e 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # return 0 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # '[' -n 1612703 ']' 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@337 -- # killprocess 1612703 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 1612703 ']' 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 1612703 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1612703 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1612703' 00:13:44.316 killing process with pid 1612703 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 1612703 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 1612703 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@254 -- # local dev 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:44.316 08:10:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:46.852 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:46.852 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:46.852 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@121 -- # return 0 00:13:46.852 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:46.852 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:46.852 08:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:46.852 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:13:46.852 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:13:46.852 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:46.852 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:13:46.852 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:13:46.852 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:13:46.853 
08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@274 -- # iptr 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # iptables-save 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:13:46.853 00:13:46.853 real 0m9.404s 00:13:46.853 user 0m5.447s 00:13:46.853 sys 0m4.921s 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:13:46.853 ************************************ 00:13:46.853 END TEST nvmf_target_discovery 00:13:46.853 ************************************ 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:46.853 ************************************ 00:13:46.853 START TEST nvmf_referrals 00:13:46.853 ************************************ 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:13:46.853 * Looking for test storage... 
00:13:46.853 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:13:46.853 08:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:46.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.853 
--rc genhtml_branch_coverage=1 00:13:46.853 --rc genhtml_function_coverage=1 00:13:46.853 --rc genhtml_legend=1 00:13:46.853 --rc geninfo_all_blocks=1 00:13:46.853 --rc geninfo_unexecuted_blocks=1 00:13:46.853 00:13:46.853 ' 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:46.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.853 --rc genhtml_branch_coverage=1 00:13:46.853 --rc genhtml_function_coverage=1 00:13:46.853 --rc genhtml_legend=1 00:13:46.853 --rc geninfo_all_blocks=1 00:13:46.853 --rc geninfo_unexecuted_blocks=1 00:13:46.853 00:13:46.853 ' 00:13:46.853 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:46.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.853 --rc genhtml_branch_coverage=1 00:13:46.854 --rc genhtml_function_coverage=1 00:13:46.854 --rc genhtml_legend=1 00:13:46.854 --rc geninfo_all_blocks=1 00:13:46.854 --rc geninfo_unexecuted_blocks=1 00:13:46.854 00:13:46.854 ' 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:46.854 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:46.854 --rc genhtml_branch_coverage=1 00:13:46.854 --rc genhtml_function_coverage=1 00:13:46.854 --rc genhtml_legend=1 00:13:46.854 --rc geninfo_all_blocks=1 00:13:46.854 --rc geninfo_unexecuted_blocks=1 00:13:46.854 00:13:46.854 ' 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:46.854 
08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@50 
-- # : 0 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:46.854 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # 
nvmftestinit 00:13:46.854 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:46.855 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:46.855 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:46.855 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:46.855 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # remove_target_ns 00:13:46.855 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:46.855 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:46.855 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:46.855 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:46.855 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:46.855 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # xtrace_disable 00:13:46.855 08:11:00 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # pci_devs=() 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:53.427 08:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # net_devs=() 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # e810=() 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # local -ga e810 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # x722=() 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # local -ga x722 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # mlx=() 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # local -ga mlx 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:53.427 08:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:53.427 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.427 08:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:53.427 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.427 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up 
== up ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:53.428 Found net devices under 0000:86:00.0: cvl_0_0 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:53.428 Found net devices under 0000:86:00.1: cvl_0_1 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:53.428 08:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # is_hw=yes 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@247 -- # create_target_ns 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@248 -- # 
setup_interfaces 1 phy 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@28 -- # local -g _dev 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@58 -- # [[ phy == veth 
]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772161 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 
00:13:53.428 10.0.0.1 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772162 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:53.428 10.0.0.2 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:13:53.428 08:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:53.428 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:53.429 08:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:53.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:53.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.498 ms 00:13:53.429 00:13:53.429 --- 10.0.0.1 ping statistics --- 00:13:53.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.429 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:53.429 08:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:13:53.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:53.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.153 ms 00:13:53.429 00:13:53.429 --- 10.0.0.2 ping statistics --- 00:13:53.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:53.429 rtt min/avg/max/mdev = 0.153/0.153/0.153/0.000 ms 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair++ )) 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # return 0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:13:53.429 08:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@174 -- # 
get_ip_address initiator1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=initiator1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # return 1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev= 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@160 -- # return 0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:53.429 08:11:06 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target0 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # [[ -n 
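The repeated `cat /sys/class/net/<dev>/ifalias` lines above are how setup.sh resolves an interface's IP: it stores the address in the device's `ifalias` sysfs attribute and reads it back (inside the target netns when needed). A small sketch of that lookup; `SYSFS_NET` is an assumption added here so the sketch can run against a scratch directory instead of the real sysfs tree:

```shell
# Sketch of the get_ip_address/ifalias lookup seen in the trace.
# SYSFS_NET (an assumption for testability) defaults to the real path.
SYSFS_NET=${SYSFS_NET:-/sys/class/net}

get_if_ip() {
	local dev=$1 ip
	ip=$(cat "$SYSFS_NET/$dev/ifalias" 2>/dev/null)
	[[ -n $ip ]] || return 1  # no alias set -> nothing to report
	echo "$ip"
}

# Demo against a scratch tree mimicking the trace's cvl_0_0 device
SYSFS_NET=$(mktemp -d)
mkdir -p "$SYSFS_NET/cvl_0_0"
echo 10.0.0.1 > "$SYSFS_NET/cvl_0_0/ifalias"
get_if_ip cvl_0_0   # prints 10.0.0.1
```

This is also why the trace's `[[ -n 10.0.0.1 ]]` checks appear right after each `cat`: an empty `ifalias` means the device was never configured by the test setup.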
NVMF_TARGET_NS_CMD ]] 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:53.429 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # get_net_dev target1 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # local dev=target1 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # return 1 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@159 -- # dev= 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@160 -- # return 0 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:13:53.430 ' 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # 
nvmfappstart -m 0xF 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # nvmfpid=1616380 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # waitforlisten 1616380 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 1616380 ']' 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:53.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:53.430 08:11:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 [2024-11-20 08:11:06.839630] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:13:53.430 [2024-11-20 08:11:06.839679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:53.430 [2024-11-20 08:11:06.918323] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:53.430 [2024-11-20 08:11:06.961247] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:53.430 [2024-11-20 08:11:06.961284] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:53.430 [2024-11-20 08:11:06.961291] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:53.430 [2024-11-20 08:11:06.961298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:53.430 [2024-11-20 08:11:06.961304] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:53.430 [2024-11-20 08:11:06.962813] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:53.430 [2024-11-20 08:11:06.962923] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:53.430 [2024-11-20 08:11:06.963006] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:53.430 [2024-11-20 08:11:06.963007] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 [2024-11-20 08:11:07.099172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 [2024-11-20 08:11:07.112465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- 
# rpc_cmd nvmf_discovery_get_referrals 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:13:53.430 08:11:07 
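The `get_referral_ips rpc` step above pipes `nvmf_discovery_get_referrals` (a JSON array) through `jq -r '.[].address.traddr'` and `sort`, then compares the joined result against the expected address list. The same extraction can be reproduced against a hand-written JSON sample shaped like the trace's three referrals (the payload below is fabricated for illustration, not real RPC output):

```shell
# Sketch of the rpc branch of get_referral_ips: extract, sort, join.
referrals_json='[
  {"address": {"trtype": "tcp", "traddr": "127.0.0.3", "trsvcid": "4430"}},
  {"address": {"trtype": "tcp", "traddr": "127.0.0.2", "trsvcid": "4430"}},
  {"address": {"trtype": "tcp", "traddr": "127.0.0.4", "trsvcid": "4430"}}
]'
ips=$(jq -r '.[].address.traddr' <<< "$referrals_json" | sort)
echo $ips   # word-splitting joins the sorted lines with single spaces
```

Echoing the variable unquoted is deliberate: it collapses the newline-separated `sort` output into one space-joined string, matching the `[[ "..." == \1\2\7... ]]` comparisons in the trace.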
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.430 08:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:13:53.430 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.431 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.431 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.431 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:53.431 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:13:53.431 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.431 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.431 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.431 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq 
-r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:53.688 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:13:53.945 08:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:53.945 08:11:07 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:54.201 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:13:54.201 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:13:54.202 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:13:54.202 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:13:54.202 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:54.202 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:54.458 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:54.458 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t 
tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:13:54.458 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.458 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:54.458 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.458 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:13:54.458 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:13:54.458 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:54.459 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery 
subsystem referral' 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:54.716 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # 
get_referral_ips nvme 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:13:54.973 08:11:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@99 -- # sync 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # set +e 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:55.231 rmmod nvme_tcp 00:13:55.231 rmmod nvme_fabrics 00:13:55.231 rmmod nvme_keyring 00:13:55.231 08:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # set -e 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # return 0 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # '[' -n 1616380 ']' 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@337 -- # killprocess 1616380 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 1616380 ']' 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 1616380 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1616380 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1616380' 00:13:55.231 killing process with pid 1616380 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 1616380 00:13:55.231 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 1616380 00:13:55.490 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:55.490 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # 
nvmf_fini 00:13:55.490 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@254 -- # local dev 00:13:55.490 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@257 -- # remove_target_ns 00:13:55.490 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:55.490 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:55.490 08:11:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@258 -- # delete_main_bridge 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@121 -- # return 0 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:13:58.028 08:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # _dev=0 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # dev_map=() 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@274 -- # iptr 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # iptables-save 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@548 -- # iptables-restore 00:13:58.028 00:13:58.028 real 0m11.017s 00:13:58.028 user 0m12.430s 00:13:58.028 sys 0m5.264s 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:13:58.028 ************************************ 00:13:58.028 END TEST nvmf_referrals 00:13:58.028 ************************************ 00:13:58.028 08:11:11 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.028 ************************************ 00:13:58.028 START TEST nvmf_connect_disconnect 00:13:58.028 ************************************ 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:13:58.028 * Looking for test storage... 00:13:58.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 
00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:58.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.028 --rc genhtml_branch_coverage=1 00:13:58.028 --rc 
genhtml_function_coverage=1 00:13:58.028 --rc genhtml_legend=1 00:13:58.028 --rc geninfo_all_blocks=1 00:13:58.028 --rc geninfo_unexecuted_blocks=1 00:13:58.028 00:13:58.028 ' 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:58.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.028 --rc genhtml_branch_coverage=1 00:13:58.028 --rc genhtml_function_coverage=1 00:13:58.028 --rc genhtml_legend=1 00:13:58.028 --rc geninfo_all_blocks=1 00:13:58.028 --rc geninfo_unexecuted_blocks=1 00:13:58.028 00:13:58.028 ' 00:13:58.028 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:58.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.028 --rc genhtml_branch_coverage=1 00:13:58.029 --rc genhtml_function_coverage=1 00:13:58.029 --rc genhtml_legend=1 00:13:58.029 --rc geninfo_all_blocks=1 00:13:58.029 --rc geninfo_unexecuted_blocks=1 00:13:58.029 00:13:58.029 ' 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:58.029 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:58.029 --rc genhtml_branch_coverage=1 00:13:58.029 --rc genhtml_function_coverage=1 00:13:58.029 --rc genhtml_legend=1 00:13:58.029 --rc geninfo_all_blocks=1 00:13:58.029 --rc geninfo_unexecuted_blocks=1 00:13:58.029 00:13:58.029 ' 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:58.029 08:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@50 -- # : 0 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:58.029 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.029 08:11:11 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:13:58.029 08:11:11 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:04.603 08:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # e810=() 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # x722=() 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:04.603 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound 
]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:04.603 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.603 08:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:04.603 Found net devices under 0000:86:00.0: cvl_0_0 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 
00:14:04.603 Found net devices under 0000:86:00.1: cvl_0_1 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@247 -- # create_target_ns 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:04.603 08:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:14:04.603 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:04.604 08:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # 
local val=167772161 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:04.604 10.0.0.1 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.2 
00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:04.604 10.0.0.2 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.604 08:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 
00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:04.604 08:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:04.604 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:04.604 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms 00:14:04.604 00:14:04.604 --- 10.0.0.1 ping statistics --- 00:14:04.604 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.604 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:04.604 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:04.605 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:04.605 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:14:04.605 00:14:04.605 --- 10.0.0.2 ping statistics --- 00:14:04.605 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.605 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # return 0 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:04.605 08:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # return 1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev= 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@160 -- # return 0 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n 
cvl_0_1 ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:04.605 08:11:17 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # local dev=target1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # return 1 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@159 -- # dev= 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@160 -- # return 0 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:14:04.605 ' 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # 
timing_enter start_nvmf_tgt 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # nvmfpid=1620482 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # waitforlisten 1620482 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 1620482 ']' 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.605 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:04.606 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.606 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.606 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:04.606 08:11:17 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:04.606 [2024-11-20 08:11:17.952778] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:14:04.606 [2024-11-20 08:11:17.952824] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.606 [2024-11-20 08:11:18.031486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:04.606 [2024-11-20 08:11:18.073461] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.606 [2024-11-20 08:11:18.073498] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.606 [2024-11-20 08:11:18.073505] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.606 [2024-11-20 08:11:18.073511] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.606 [2024-11-20 08:11:18.073516] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:04.606 [2024-11-20 08:11:18.074956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:04.606 [2024-11-20 08:11:18.075068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:04.606 [2024-11-20 08:11:18.075176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:04.606 [2024-11-20 08:11:18.075177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:04.863 [2024-11-20 08:11:18.834102] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:04.863 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.120 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:05.120 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.120 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.120 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:05.120 08:11:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:05.120 [2024-11-20 08:11:18.897500] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.120 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:05.120 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:14:05.120 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:14:05.120 08:11:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:14:08.391 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.360 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@99 -- # sync 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # set +e 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:21.651 rmmod nvme_tcp 00:14:21.651 rmmod nvme_fabrics 00:14:21.651 rmmod nvme_keyring 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # set -e 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # return 0 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # '[' -n 1620482 ']' 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@337 -- # killprocess 1620482 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1620482 ']' 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 1620482 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1620482 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1620482' 00:14:21.651 killing process with pid 1620482 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 1620482 00:14:21.651 08:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 1620482 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@254 -- # local dev 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:21.651 08:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@121 -- # return 0 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:14:23.557 08:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@274 -- # iptr 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # iptables-save 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # grep -v 
SPDK_NVMF 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@548 -- # iptables-restore 00:14:23.557 00:14:23.557 real 0m25.985s 00:14:23.557 user 1m10.944s 00:14:23.557 sys 0m5.998s 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:23.557 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:14:23.557 ************************************ 00:14:23.557 END TEST nvmf_connect_disconnect 00:14:23.557 ************************************ 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:23.817 ************************************ 00:14:23.817 START TEST nvmf_multitarget 00:14:23.817 ************************************ 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:14:23.817 * Looking for test storage... 
00:14:23.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:14:23.817 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:23.818 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.818 --rc genhtml_branch_coverage=1 00:14:23.818 --rc genhtml_function_coverage=1 00:14:23.818 --rc genhtml_legend=1 00:14:23.818 --rc geninfo_all_blocks=1 00:14:23.818 --rc geninfo_unexecuted_blocks=1 00:14:23.818 00:14:23.818 ' 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:23.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.818 --rc genhtml_branch_coverage=1 00:14:23.818 --rc genhtml_function_coverage=1 00:14:23.818 --rc genhtml_legend=1 00:14:23.818 --rc geninfo_all_blocks=1 00:14:23.818 --rc geninfo_unexecuted_blocks=1 00:14:23.818 00:14:23.818 ' 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:23.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.818 --rc genhtml_branch_coverage=1 00:14:23.818 --rc genhtml_function_coverage=1 00:14:23.818 --rc genhtml_legend=1 00:14:23.818 --rc geninfo_all_blocks=1 00:14:23.818 --rc geninfo_unexecuted_blocks=1 00:14:23.818 00:14:23.818 ' 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:23.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:23.818 --rc genhtml_branch_coverage=1 00:14:23.818 --rc genhtml_function_coverage=1 00:14:23.818 --rc genhtml_legend=1 00:14:23.818 --rc geninfo_all_blocks=1 00:14:23.818 --rc geninfo_unexecuted_blocks=1 00:14:23.818 00:14:23.818 ' 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:23.818 08:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@50 -- # : 0 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:23.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:23.818 08:11:37 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # remove_target_ns 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # xtrace_disable 00:14:23.818 08:11:37 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # pci_devs=() 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # net_devs=() 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:30.399 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # e810=() 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # local -ga e810 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # x722=() 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # local -ga x722 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # mlx=() 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # local -ga mlx 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:30.399 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:30.399 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:30.399 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:30.399 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:30.399 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:30.400 Found net devices under 0000:86:00.0: cvl_0_0 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:30.400 Found net devices under 0000:86:00.1: cvl_0_1 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # is_hw=yes 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:30.400 
08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@247 -- # create_target_ns 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@27 -- # local -gA dev_map 
00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@28 -- # local -g _dev 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:30.400 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772161 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:30.400 10.0.0.1 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:30.400 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772162 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:30.400 10.0.0.2 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
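The trace above shows `set_ip` handing a packed 32-bit integer (167772161, i.e. 0x0A000001) to a `val_to_ip` helper that prints it as a dotted quad. A minimal pure-shell sketch of that conversion, mirroring the trace's output; the byte-shift form here is an equivalent reimplementation for illustration, not the verbatim body of `nvmf/setup.sh`:

```shell
# Convert a 32-bit integer into dotted-quad IPv4 notation, as in the
# val_to_ip step traced above (167772161 == 0x0A000001 == 10.0.0.1).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator)
val_to_ip 167772162   # 10.0.0.2 (target)
```

Packing addresses as integers lets the setup script allocate an IP pool with plain arithmetic (`ip_pool += 2` per interface pair, as seen later in the trace).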
nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
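Up to this point the trace has built one initiator/target pair: a network namespace, the target NIC moved into it, an address on each side, links up, and an iptables accept rule for the NVMe/TCP port. A dry-run sketch of that sequence, with the commands taken from the log; `run` only echoes them here, since the real sequence needs root privileges and the physical `cvl_0_*` interfaces:

```shell
# Dry-run sketch of the target-namespace setup traced above.
run() { echo "$@"; }    # swap for eval "$@" on a real test box

NS=nvmf_ns_spdk
run ip netns add "$NS"
run ip netns exec "$NS" ip link set lo up
run ip link set cvl_0_1 netns "$NS"             # target side moves into the netns
run ip addr add 10.0.0.1/24 dev cvl_0_0         # initiator stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_1
run ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set cvl_0_1 up
run iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Isolating the target in its own namespace lets initiator and target run on one host while still exercising a real NIC-to-NIC TCP path.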
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:30.400 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:30.401 
08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:30.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:30.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:14:30.401 00:14:30.401 --- 10.0.0.1 ping statistics --- 00:14:30.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.401 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:30.401 
08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:30.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:30.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:14:30.401 00:14:30.401 --- 10.0.0.2 ping statistics --- 00:14:30.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:30.401 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # return 0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@322 -- # 
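The `ping_ips` step above verifies the pair in both directions before any NVMe traffic flows: the target namespace pings the initiator address, then the root namespace pings the target address. As a self-contained dry-run sketch (again echoing rather than executing, since it depends on the namespace built earlier):

```shell
# Dry-run sketch of the cross-namespace connectivity check traced above.
run() { echo "$@"; }

run ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
run ping -c 1 10.0.0.2                              # root ns -> target
```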
NVMF_TARGET_INTERFACE2= 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:30.401 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # return 1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev= 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@160 -- # return 0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:30.401 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target0 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:30.401 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@156 -- # 
local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # local dev=target1 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # return 1 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@159 -- # dev= 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@160 -- # return 0 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:14:30.402 ' 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:30.402 08:11:43 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # nvmfpid=1627026 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # waitforlisten 1627026 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 1627026 ']' 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:30.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:30.402 08:11:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:30.402 [2024-11-20 08:11:43.992097] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:14:30.402 [2024-11-20 08:11:43.992147] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:30.402 [2024-11-20 08:11:44.070649] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:30.402 [2024-11-20 08:11:44.113405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:30.402 [2024-11-20 08:11:44.113443] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:30.402 [2024-11-20 08:11:44.113450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:30.402 [2024-11-20 08:11:44.113456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:30.402 [2024-11-20 08:11:44.113461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:30.402 [2024-11-20 08:11:44.115096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.402 [2024-11-20 08:11:44.115224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:30.402 [2024-11-20 08:11:44.115291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:30.402 [2024-11-20 08:11:44.115291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:30.402 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:30.402 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:14:30.402 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:30.402 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:30.402 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:30.402 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:30.402 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:14:30.402 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:30.402 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:14:30.402 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:14:30.402 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:14:30.662 "nvmf_tgt_1" 00:14:30.662 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:14:30.662 "nvmf_tgt_2" 00:14:30.662 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:30.662 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:14:30.662 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:14:30.662 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:14:30.921 true 00:14:30.921 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:14:30.921 true 00:14:30.921 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:14:30.921 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:14:31.181 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:14:31.181 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:14:31.181 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:14:31.181 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:31.181 08:11:44 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@99 -- # sync 00:14:31.181 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:31.181 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # set +e 00:14:31.181 08:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:31.181 rmmod nvme_tcp 00:14:31.181 rmmod nvme_fabrics 00:14:31.181 rmmod nvme_keyring 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # set -e 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # return 0 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # '[' -n 1627026 ']' 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@337 -- # killprocess 1627026 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 1627026 ']' 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 1627026 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1627026 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1627026' 00:14:31.181 killing process with pid 1627026 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 1627026 00:14:31.181 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 1627026 00:14:31.440 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:31.440 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # nvmf_fini 00:14:31.440 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@254 -- # local dev 00:14:31.440 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@257 -- # remove_target_ns 00:14:31.440 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:31.440 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:31.440 08:11:45 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@258 -- # delete_main_bridge 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@121 -- # return 0 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:33.347 08:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # _dev=0 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # dev_map=() 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@274 -- # iptr 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # 
iptables-save 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@548 -- # iptables-restore 00:14:33.347 00:14:33.347 real 0m9.734s 00:14:33.347 user 0m7.276s 00:14:33.347 sys 0m4.929s 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:33.347 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:14:33.347 ************************************ 00:14:33.347 END TEST nvmf_multitarget 00:14:33.347 ************************************ 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:33.608 ************************************ 00:14:33.608 START TEST nvmf_rpc 00:14:33.608 ************************************ 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:14:33.608 * Looking for test storage... 
00:14:33.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:33.608 08:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:33.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.608 --rc genhtml_branch_coverage=1 00:14:33.608 --rc genhtml_function_coverage=1 00:14:33.608 --rc genhtml_legend=1 00:14:33.608 --rc geninfo_all_blocks=1 00:14:33.608 --rc geninfo_unexecuted_blocks=1 
00:14:33.608 00:14:33.608 ' 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:33.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.608 --rc genhtml_branch_coverage=1 00:14:33.608 --rc genhtml_function_coverage=1 00:14:33.608 --rc genhtml_legend=1 00:14:33.608 --rc geninfo_all_blocks=1 00:14:33.608 --rc geninfo_unexecuted_blocks=1 00:14:33.608 00:14:33.608 ' 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:33.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.608 --rc genhtml_branch_coverage=1 00:14:33.608 --rc genhtml_function_coverage=1 00:14:33.608 --rc genhtml_legend=1 00:14:33.608 --rc geninfo_all_blocks=1 00:14:33.608 --rc geninfo_unexecuted_blocks=1 00:14:33.608 00:14:33.608 ' 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:33.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:33.608 --rc genhtml_branch_coverage=1 00:14:33.608 --rc genhtml_function_coverage=1 00:14:33.608 --rc genhtml_legend=1 00:14:33.608 --rc geninfo_all_blocks=1 00:14:33.608 --rc geninfo_unexecuted_blocks=1 00:14:33.608 00:14:33.608 ' 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.608 08:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.608 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
paths/export.sh@5 -- # export PATH 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@50 -- # : 0 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:33.609 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:33.609 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # remove_target_ns 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # xtrace_disable 00:14:33.868 08:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # pci_devs=() 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # net_devs=() 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # e810=() 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # local -ga e810 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # x722=() 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # local -ga x722 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # mlx=() 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # local -ga mlx 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:40.442 08:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:40.442 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:40.443 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:40.443 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:40.443 08:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:40.443 Found net devices under 0000:86:00.0: cvl_0_0 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:40.443 Found net devices under 0000:86:00.1: cvl_0_1 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:40.443 08:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # is_hw=yes 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@247 -- # create_target_ns 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@248 -- # 
setup_interfaces 1 phy 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@28 -- # local -g _dev 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@59 -- # [[ phy == veth 
]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772161 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:40.443 10.0.0.1 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:40.443 08:11:53 
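The `val_to_ip` calls traced above turn the pool value 167772161 (0x0A000001) into `10.0.0.1` before `ip addr add`. A self-contained sketch of that conversion, consistent with the `printf '%u.%u.%u.%u'` seen in the trace (the shift/mask unpacking is our reconstruction of how the four octets are derived):

```shell
# Unpack a 32-bit integer into dotted-quad form, as nvmf/setup.sh's
# val_to_ip does for its ip_pool values (167772161 -> 10.0.0.1).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
        $((val >> 8 & 0xff)) $((val & 0xff))
}
```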
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772162 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:40.443 10.0.0.2 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- 
# ip link set cvl_0_0 up 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:40.443 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:40.444 
08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 
00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:40.444 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:40.444 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:14:40.444 00:14:40.444 --- 10.0.0.1 ping statistics --- 00:14:40.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.444 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:40.444 08:11:53 
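By this point the trace has created the `nvmf_ns_spdk` namespace, moved `cvl_0_1` into it, assigned 10.0.0.1/10.0.0.2, brought both links up, and opened port 4420 with iptables. Condensed into one place as a dry-run sketch that only prints the commands — an assumption for illustration, since the real script executes them via `eval` and they require root:

```shell
# Dry-run recap of the namespace setup performed by nvmf/setup.sh
# above. Printing instead of executing is deliberate: the actual
# commands need root and a real NIC.
netns_setup_cmds() {
    local ns=$1 target_dev=$2 target_ip=$3
    cat <<EOF
ip netns add $ns
ip link set $target_dev netns $ns
ip netns exec $ns ip addr add $target_ip/24 dev $target_dev
ip netns exec $ns ip link set $target_dev up
EOF
}
```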
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:14:40.444 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:40.444 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:14:40.444 00:14:40.444 --- 10.0.0.2 ping statistics --- 00:14:40.444 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:40.444 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # return 0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:14:40.444 08:11:53 
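The two `ping -c 1` runs above verify the initiator/target pair in both directions before the test proceeds; the pair is healthy only at 0% packet loss. The helper below is our illustration, not part of `nvmf/setup.sh` — it extracts the loss percentage from ping's summary line:

```shell
# Pull the "X% packet loss" figure out of ping output read on stdin.
# ping's summary line is comma-separated; the third field carries the
# loss percentage.
ping_loss_pct() {
    awk -F, '/packet loss/ { sub(/%.*/, "", $3); sub(/^ */, "", $3); print $3 }'
}
```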
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # return 1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev= 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@160 -- # return 0 00:14:40.444 08:11:53 
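The `get_net_dev` calls traced above resolve logical names (`initiator0`, `target0`) to real interfaces through the `dev_map` associative array filled in during `setup_interface_pair`; a lookup that finds nothing (`initiator1` here, since only one pair was configured) returns 1, which is why `NVMF_SECOND_INITIATOR_IP` ends up empty. A minimal sketch of that lookup, with the map contents mirroring this run:

```shell
# Minimal sketch of the dev_map lookup behind get_net_dev. Only the
# single pair configured in this run exists, so initiator1 resolves to
# nothing and the function returns nonzero.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)
get_net_dev() {
    local dev=$1
    [[ -n ${dev_map[$dev]:-} ]] && echo "${dev_map[$dev]}"
}
```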
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target0 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:14:40.444 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@164 -- # 
[[ -n 10.0.0.2 ]] 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # local dev=target1 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # return 1 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@159 -- # dev= 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@160 -- # return 0 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:14:40.445 ' 00:14:40.445 08:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # nvmfpid=1630741 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # waitforlisten 1630741 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 1630741 ']' 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:14:40.445 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.445 08:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.445 [2024-11-20 08:11:53.819073] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:14:40.445 [2024-11-20 08:11:53.819118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:40.445 [2024-11-20 08:11:53.899991] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:40.445 [2024-11-20 08:11:53.943303] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:40.445 [2024-11-20 08:11:53.943338] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:40.445 [2024-11-20 08:11:53.943345] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:40.445 [2024-11-20 08:11:53.943351] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:40.445 [2024-11-20 08:11:53.943356] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:40.445 [2024-11-20 08:11:53.944799] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:40.445 [2024-11-20 08:11:53.944904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:40.445 [2024-11-20 08:11:53.944920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:40.445 [2024-11-20 08:11:53.944925] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:14:40.705 "tick_rate": 2100000000, 00:14:40.705 "poll_groups": [ 00:14:40.705 { 00:14:40.705 "name": "nvmf_tgt_poll_group_000", 00:14:40.705 "admin_qpairs": 0, 00:14:40.705 "io_qpairs": 0, 00:14:40.705 "current_admin_qpairs": 0, 00:14:40.705 "current_io_qpairs": 0, 00:14:40.705 "pending_bdev_io": 0, 00:14:40.705 "completed_nvme_io": 0, 
00:14:40.705 "transports": [] 00:14:40.705 }, 00:14:40.705 { 00:14:40.705 "name": "nvmf_tgt_poll_group_001", 00:14:40.705 "admin_qpairs": 0, 00:14:40.705 "io_qpairs": 0, 00:14:40.705 "current_admin_qpairs": 0, 00:14:40.705 "current_io_qpairs": 0, 00:14:40.705 "pending_bdev_io": 0, 00:14:40.705 "completed_nvme_io": 0, 00:14:40.705 "transports": [] 00:14:40.705 }, 00:14:40.705 { 00:14:40.705 "name": "nvmf_tgt_poll_group_002", 00:14:40.705 "admin_qpairs": 0, 00:14:40.705 "io_qpairs": 0, 00:14:40.705 "current_admin_qpairs": 0, 00:14:40.705 "current_io_qpairs": 0, 00:14:40.705 "pending_bdev_io": 0, 00:14:40.705 "completed_nvme_io": 0, 00:14:40.705 "transports": [] 00:14:40.705 }, 00:14:40.705 { 00:14:40.705 "name": "nvmf_tgt_poll_group_003", 00:14:40.705 "admin_qpairs": 0, 00:14:40.705 "io_qpairs": 0, 00:14:40.705 "current_admin_qpairs": 0, 00:14:40.705 "current_io_qpairs": 0, 00:14:40.705 "pending_bdev_io": 0, 00:14:40.705 "completed_nvme_io": 0, 00:14:40.705 "transports": [] 00:14:40.705 } 00:14:40.705 ] 00:14:40.705 }' 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:14:40.705 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:14:40.964 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:14:40.964 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:14:40.964 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:14:40.964 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:40.964 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 
-- # xtrace_disable 00:14:40.964 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.964 [2024-11-20 08:11:54.799694] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:40.964 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.964 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:14:40.964 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.964 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.964 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.964 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:14:40.964 "tick_rate": 2100000000, 00:14:40.964 "poll_groups": [ 00:14:40.964 { 00:14:40.964 "name": "nvmf_tgt_poll_group_000", 00:14:40.964 "admin_qpairs": 0, 00:14:40.964 "io_qpairs": 0, 00:14:40.964 "current_admin_qpairs": 0, 00:14:40.964 "current_io_qpairs": 0, 00:14:40.964 "pending_bdev_io": 0, 00:14:40.964 "completed_nvme_io": 0, 00:14:40.964 "transports": [ 00:14:40.964 { 00:14:40.964 "trtype": "TCP" 00:14:40.964 } 00:14:40.964 ] 00:14:40.964 }, 00:14:40.964 { 00:14:40.964 "name": "nvmf_tgt_poll_group_001", 00:14:40.964 "admin_qpairs": 0, 00:14:40.964 "io_qpairs": 0, 00:14:40.964 "current_admin_qpairs": 0, 00:14:40.964 "current_io_qpairs": 0, 00:14:40.964 "pending_bdev_io": 0, 00:14:40.964 "completed_nvme_io": 0, 00:14:40.964 "transports": [ 00:14:40.964 { 00:14:40.964 "trtype": "TCP" 00:14:40.964 } 00:14:40.964 ] 00:14:40.964 }, 00:14:40.964 { 00:14:40.964 "name": "nvmf_tgt_poll_group_002", 00:14:40.964 "admin_qpairs": 0, 00:14:40.964 "io_qpairs": 0, 00:14:40.964 "current_admin_qpairs": 0, 00:14:40.964 "current_io_qpairs": 0, 00:14:40.964 "pending_bdev_io": 0, 00:14:40.964 "completed_nvme_io": 0, 00:14:40.964 
"transports": [ 00:14:40.964 { 00:14:40.964 "trtype": "TCP" 00:14:40.964 } 00:14:40.964 ] 00:14:40.964 }, 00:14:40.964 { 00:14:40.964 "name": "nvmf_tgt_poll_group_003", 00:14:40.964 "admin_qpairs": 0, 00:14:40.964 "io_qpairs": 0, 00:14:40.964 "current_admin_qpairs": 0, 00:14:40.964 "current_io_qpairs": 0, 00:14:40.964 "pending_bdev_io": 0, 00:14:40.964 "completed_nvme_io": 0, 00:14:40.964 "transports": [ 00:14:40.964 { 00:14:40.964 "trtype": "TCP" 00:14:40.964 } 00:14:40.964 ] 00:14:40.964 } 00:14:40.964 ] 00:14:40.965 }' 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:14:40.965 08:11:54 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.965 Malloc1 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:40.965 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.965 [2024-11-20 08:11:54.986967] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:41.240 08:11:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:14:41.240 [2024-11-20 08:11:55.015664] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:14:41.240 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:41.240 could not add new controller: failed to write to nvme-fabrics device 00:14:41.240 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:41.240 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:41.240 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:41.240 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:41.240 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:41.240 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.240 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.240 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
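The `jcount`/`jsum` helpers traced above (target/rpc.sh@14–20) pipe `nvmf_get_stats` output through a `jq` filter and then count lines with `wc -l` or sum values with `awk '{s+=$1}END{print s}'`. A minimal sketch of the summing stage, with the `jq` step replaced by pre-extracted per-poll-group numbers so it runs standalone (`jsum_like` is a hypothetical name, not the helper in target/rpc.sh):

```shell
# Hedged sketch: sums one number per input line, mirroring the awk stage
# of the jsum helper seen in the trace (awk '{s+=$1}END{print s}').
jsum_like() {
    # s+0 forces numeric output even for empty input
    awk '{s+=$1} END {print s+0}'
}

# The trace summed .poll_groups[].admin_qpairs across 4 poll groups,
# all zero at this point, so the check (( 0 == 0 )) passed.
printf '%s\n' 0 0 0 0 | jsum_like
```

In the real helper the numbers come from `jq '.poll_groups[].admin_qpairs'` (or `.io_qpairs`) applied to the `nvmf_get_stats` JSON captured in `$stats`.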
00:14:41.240 08:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:42.618 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:14:42.618 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:42.618 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:42.618 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:42.618 08:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:44.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- 
# local i=0 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:14:44.527 08:11:58 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:44.527 [2024-11-20 08:11:58.401790] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:14:44.527 Failed to write to /dev/nvme-fabrics: Input/output error 00:14:44.527 could not add new controller: failed to write to nvme-fabrics device 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:44.527 08:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:45.904 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:14:45.904 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:45.904 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:45.904 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:45.904 08:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:47.809 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:47.809 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:47.809 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:47.809 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:47.810 08:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.810 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.810 08:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.810 [2024-11-20 08:12:01.732355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.810 08:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:49.189 08:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:49.189 08:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:49.189 08:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:49.189 08:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:49.189 08:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:51.094 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:51.094 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:51.094 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:51.094 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:51.094 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:51.094 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:51.094 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:51.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:51.094 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:51.094 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:51.094 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:51.094 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.095 08:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.095 [2024-11-20 08:12:05.008322] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:51.095 08:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:52.473 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:52.473 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 
00:14:52.473 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:52.473 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:52.473 08:12:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:54.381 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:54.381 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:54.381 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:54.381 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:54.381 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:54.381 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:54.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:14:54.382 [2024-11-20 08:12:08.317501] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.382 08:12:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:55.760 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:55.760 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:55.760 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:55.760 08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:55.760 
08:12:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:57.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
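The connect/disconnect cycles above all follow the same `waitforserial` pattern (common/autotest_common.sh@1202–1212): after `nvme connect`, poll `lsblk -l -o NAME,SERIAL` for the subsystem serial (SPDKISFASTANDAWESOME) with a bounded retry count and a 2-second sleep between polls. A testable sketch of that loop, with a stub in place of `lsblk` and the sleep shortened (`waitforserial_like` and `list_block_serials` are hypothetical names for illustration):

```shell
# Hedged sketch of the waitforserial polling loop from the trace.
# Retries up to 16 times (i++ <= 15 in the original) looking for the
# expected serial in the block-device listing.
waitforserial_like() {
    serial=$1
    i=0
    while [ "$i" -le 15 ]; do
        i=$((i + 1))
        # real helper: lsblk -l -o NAME,SERIAL | grep -c "$serial"
        if list_block_serials | grep -qw "$serial"; then
            return 0
        fi
        sleep 0  # the traced helper sleeps 2s between polls
    done
    return 1
}

# Stub standing in for `lsblk -l -o NAME,SERIAL` (hypothetical output).
list_block_serials() {
    printf 'nvme0n1 SPDKISFASTANDAWESOME\n'
}

waitforserial_like SPDKISFASTANDAWESOME && echo connected
```

The companion `waitforserial_disconnect` inverts the test: it loops until `grep -q -w` on the same `lsblk` output no longer matches, confirming the namespace device is gone after `nvme disconnect`.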
00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:57.666 [2024-11-20 08:12:11.581907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.666 08:12:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:59.041 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:14:59.041 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:14:59.041 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:59.041 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:14:59.041 08:12:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:00.944 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.944 [2024-11-20 08:12:14.873857] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.944 08:12:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:15:02.323 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:15:02.323 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:15:02.323 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:02.323 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:02.323 08:12:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:15:04.227 08:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:15:04.227 08:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:15:04.227 08:12:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1211 -- # nvme_devices=1 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:04.227 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.227 08:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.227 [2024-11-20 08:12:18.195190] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.227 
08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.227 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.228 
08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.228 [2024-11-20 08:12:18.243213] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.228 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.486 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 [2024-11-20 08:12:18.291358] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 [2024-11-20 08:12:18.339513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 [2024-11-20 08:12:18.387668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:04.487 08:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:04.487 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.488 08:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:15:04.488 "tick_rate": 2100000000, 00:15:04.488 "poll_groups": [ 00:15:04.488 { 00:15:04.488 "name": "nvmf_tgt_poll_group_000", 00:15:04.488 "admin_qpairs": 2, 00:15:04.488 "io_qpairs": 168, 00:15:04.488 "current_admin_qpairs": 0, 00:15:04.488 "current_io_qpairs": 0, 00:15:04.488 "pending_bdev_io": 0, 00:15:04.488 "completed_nvme_io": 318, 00:15:04.488 "transports": [ 00:15:04.488 { 00:15:04.488 "trtype": "TCP" 00:15:04.488 } 00:15:04.488 ] 00:15:04.488 }, 00:15:04.488 { 00:15:04.488 "name": "nvmf_tgt_poll_group_001", 00:15:04.488 "admin_qpairs": 2, 00:15:04.488 "io_qpairs": 168, 00:15:04.488 "current_admin_qpairs": 0, 00:15:04.488 "current_io_qpairs": 0, 00:15:04.488 "pending_bdev_io": 0, 00:15:04.488 "completed_nvme_io": 211, 00:15:04.488 "transports": [ 00:15:04.488 { 00:15:04.488 "trtype": "TCP" 00:15:04.488 } 00:15:04.488 ] 00:15:04.488 }, 00:15:04.488 { 00:15:04.488 "name": "nvmf_tgt_poll_group_002", 00:15:04.488 "admin_qpairs": 1, 00:15:04.488 "io_qpairs": 168, 00:15:04.488 "current_admin_qpairs": 0, 00:15:04.488 "current_io_qpairs": 0, 00:15:04.488 "pending_bdev_io": 0, 00:15:04.488 "completed_nvme_io": 255, 00:15:04.488 "transports": [ 00:15:04.488 { 00:15:04.488 "trtype": "TCP" 00:15:04.488 } 00:15:04.488 ] 00:15:04.488 }, 00:15:04.488 { 00:15:04.488 "name": "nvmf_tgt_poll_group_003", 00:15:04.488 "admin_qpairs": 2, 00:15:04.488 "io_qpairs": 168, 00:15:04.488 "current_admin_qpairs": 0, 00:15:04.488 "current_io_qpairs": 0, 00:15:04.488 "pending_bdev_io": 0, 
00:15:04.488 "completed_nvme_io": 238, 00:15:04.488 "transports": [ 00:15:04.488 { 00:15:04.488 "trtype": "TCP" 00:15:04.488 } 00:15:04.488 ] 00:15:04.488 } 00:15:04.488 ] 00:15:04.488 }' 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:15:04.488 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:15:04.747 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:15:04.747 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:15:04.747 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:15:04.747 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:15:04.747 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:04.747 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@99 -- # sync 00:15:04.747 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:04.747 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # 
set +e 00:15:04.747 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:04.747 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:04.747 rmmod nvme_tcp 00:15:04.748 rmmod nvme_fabrics 00:15:04.748 rmmod nvme_keyring 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # set -e 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # return 0 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # '[' -n 1630741 ']' 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@337 -- # killprocess 1630741 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 1630741 ']' 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 1630741 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1630741 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1630741' 00:15:04.748 killing process with pid 1630741 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 1630741 00:15:04.748 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@978 -- # wait 1630741 00:15:05.018 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:05.018 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # nvmf_fini 00:15:05.018 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@254 -- # local dev 00:15:05.018 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@257 -- # remove_target_ns 00:15:05.018 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:05.018 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:05.018 08:12:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@121 -- # return 0 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 
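The `jsum` helper traced above (`target/rpc.sh@19-20`) totals a per-poll-group counter by feeding a `jq` filter's output, one number per line, into `awk`. A minimal sketch of the summing stage alone, with illustrative input values standing in for the `jq` output (the values below are not taken from this run):

```shell
# Sum one number per line, as jsum's awk stage does with jq output.
# Input values are illustrative stand-ins for per-poll-group counters.
total=$(printf '%s\n' 3 1 2 1 | awk '{s+=$1} END {print s}')
echo "$total"   # prints 7
```

The same pattern works for any of the summed fields in the trace, e.g. `.poll_groups[].io_qpairs`.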
00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:15:06.922 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:15:06.923 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:06.923 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:15:06.923 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:15:06.923 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:06.923 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # _dev=0 00:15:06.923 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # dev_map=() 00:15:06.923 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@274 -- # iptr 00:15:06.923 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # iptables-save 00:15:06.923 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:06.923 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@548 -- # iptables-restore 00:15:06.923 00:15:06.923 real 0m33.498s 00:15:06.923 user 1m41.212s 00:15:06.923 sys 0m6.586s 00:15:06.923 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:06.923 08:12:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:15:06.923 ************************************ 00:15:06.923 END TEST nvmf_rpc 00:15:06.923 ************************************ 
00:15:07.182 08:12:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:07.182 08:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:07.182 08:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:07.182 08:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:07.182 ************************************ 00:15:07.182 START TEST nvmf_invalid 00:15:07.182 ************************************ 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:15:07.182 * Looking for test storage... 00:15:07.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:15:07.182 
08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:07.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.182 --rc genhtml_branch_coverage=1 00:15:07.182 --rc genhtml_function_coverage=1 00:15:07.182 --rc genhtml_legend=1 00:15:07.182 --rc geninfo_all_blocks=1 00:15:07.182 --rc geninfo_unexecuted_blocks=1 00:15:07.182 00:15:07.182 ' 
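The `cmp_versions` trace above (`scripts/common.sh@333-368`) splits both version strings into components and compares them numerically field by field, which is why `lt 1.15 2` holds even though `"1.15" > "2"` lexically. A condensed sketch of the same idea; the function name and dot-only splitting are simplifications of what the trace shows:

```shell
# Component-wise "less than" for dotted versions, padding missing fields with 0.
version_lt() {
  local -a a b
  local i
  IFS=. read -ra a <<< "$1"
  IFS=. read -ra b <<< "$2"
  for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
    if   (( ${a[i]:-0} < ${b[i]:-0} )); then return 0
    elif (( ${a[i]:-0} > ${b[i]:-0} )); then return 1
    fi
  done
  return 1  # equal versions are not less-than
}
version_lt 1.15 2 && echo "1.15 < 2"
```

Padding the shorter array with `0` makes `1.15` vs `2` behave like `1.15` vs `2.0`, matching the component count handling in the traced loop.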
00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:07.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.182 --rc genhtml_branch_coverage=1 00:15:07.182 --rc genhtml_function_coverage=1 00:15:07.182 --rc genhtml_legend=1 00:15:07.182 --rc geninfo_all_blocks=1 00:15:07.182 --rc geninfo_unexecuted_blocks=1 00:15:07.182 00:15:07.182 ' 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:07.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.182 --rc genhtml_branch_coverage=1 00:15:07.182 --rc genhtml_function_coverage=1 00:15:07.182 --rc genhtml_legend=1 00:15:07.182 --rc geninfo_all_blocks=1 00:15:07.182 --rc geninfo_unexecuted_blocks=1 00:15:07.182 00:15:07.182 ' 00:15:07.182 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:07.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:07.182 --rc genhtml_branch_coverage=1 00:15:07.182 --rc genhtml_function_coverage=1 00:15:07.182 --rc genhtml_legend=1 00:15:07.183 --rc geninfo_all_blocks=1 00:15:07.183 --rc geninfo_unexecuted_blocks=1 00:15:07.183 00:15:07.183 ' 00:15:07.183 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:07.183 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:15:07.183 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:07.183 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:07.183 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:07.183 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:07.183 08:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:07.183 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:07.183 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:07.183 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- paths/export.sh@5 -- # export PATH 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@50 -- # : 0 00:15:07.442 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 
00:15:07.443 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # remove_target_ns 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # 
eval '_remove_target_ns 15> /dev/null' 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # xtrace_disable 00:15:07.443 08:12:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # pci_devs=() 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # net_devs=() 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # e810=() 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # local -ga e810 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # x722=() 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # local -ga x722 00:15:14.138 08:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # mlx=() 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # local -ga mlx 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 
00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:14.138 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.138 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:14.139 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:14.139 Found net devices under 0000:86:00.0: cvl_0_0 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:14.139 
08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:14.139 Found net devices under 0000:86:00.1: cvl_0_1 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # is_hw=yes 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@247 -- # create_target_ns 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:15:14.139 08:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@28 -- # local -g _dev 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:14.139 08:12:26 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:14.139 08:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772161 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:14.139 10.0.0.1 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772162 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # eval 'ip netns 
exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:14.139 10.0.0.2 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:14.139 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:14.140 08:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:14.140 PING 10.0.0.1 (10.0.0.1) 
56(84) bytes of data. 00:15:14.140 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.462 ms 00:15:14.140 00:15:14.140 --- 10.0.0.1 ping statistics --- 00:15:14.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.140 rtt min/avg/max/mdev = 0.462/0.462/0.462/0.000 ms 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:14.140 08:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:14.140 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:14.140 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:15:14.140 00:15:14.140 --- 10.0.0.2 ping statistics --- 00:15:14.140 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:14.140 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # return 0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:15:14.140 08:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:14.140 08:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # return 1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev= 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@160 -- # return 0 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:14.140 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target0 
00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target0 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # local dev=target1 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # return 1 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@159 -- # dev= 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@160 -- # return 0 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:15:14.141 ' 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:14.141 08:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # nvmfpid=1639092 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # waitforlisten 1639092 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 1639092 ']' 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:14.141 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:14.141 [2024-11-20 08:12:27.389668] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
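The `nvmfappstart` step above backgrounds `nvmf_tgt` inside the `nvmf_ns_spdk` namespace and then `waitforlisten` polls until the app's UNIX-domain RPC socket exists. A minimal sketch of such a poll loop (the socket path comes from the log; the helper name, retry count, and interval are assumptions, not the actual `waitforlisten` from `autotest_common.sh`):

```shell
# Poll until an SPDK-style RPC socket appears. Hypothetical helper; the real
# waitforlisten also checks that the target pid is still alive.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while (( retries > 0 )); do
        [[ -S $sock ]] && return 0   # -S: path exists and is a socket
        sleep 0.1
        retries=$((retries - 1))
    done
    return 1                          # app never came up within the budget
}
```

With the default arguments this gives the target roughly ten seconds to create `/var/tmp/spdk.sock` before the caller gives up.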
00:15:14.141 [2024-11-20 08:12:27.389714] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:14.141 [2024-11-20 08:12:27.466105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:14.141 [2024-11-20 08:12:27.507657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:14.141 [2024-11-20 08:12:27.507694] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:14.141 [2024-11-20 08:12:27.507701] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:14.141 [2024-11-20 08:12:27.507707] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:14.141 [2024-11-20 08:12:27.507712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
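The `-m 0xF` mask passed to `nvmf_tgt` above selects cores 0-3, which is why exactly four "Reactor started on core N" lines follow. A small sketch of expanding such a mask into core indices (hypothetical helper, not part of the SPDK scripts):

```shell
# Expand a hex core mask (e.g. the -m 0xF above) into the core indices it
# selects: each set bit at position N enables core N.
cores_from_mask() {
    local mask=$(( $1 )) core=0 out=()
    while (( mask )); do
        if (( mask & 1 )); then
            out+=("$core")
        fi
        core=$((core + 1))
        mask=$((mask >> 1))
    done
    echo "${out[@]}"
}
```

So `cores_from_mask 0xF` yields `0 1 2 3`, matching the reactor startup lines in the log.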
00:15:14.141 [2024-11-20 08:12:27.509096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:14.141 [2024-11-20 08:12:27.509224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:14.141 [2024-11-20 08:12:27.509290] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:14.141 [2024-11-20 08:12:27.509291] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode7389 00:15:14.141 [2024-11-20 08:12:27.821788] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:15:14.141 { 00:15:14.141 "nqn": "nqn.2016-06.io.spdk:cnode7389", 00:15:14.141 "tgt_name": "foobar", 00:15:14.141 "method": "nvmf_create_subsystem", 00:15:14.141 "req_id": 1 00:15:14.141 } 00:15:14.141 Got JSON-RPC error 
response 00:15:14.141 response: 00:15:14.141 { 00:15:14.141 "code": -32603, 00:15:14.141 "message": "Unable to find target foobar" 00:15:14.141 }' 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:15:14.141 { 00:15:14.141 "nqn": "nqn.2016-06.io.spdk:cnode7389", 00:15:14.141 "tgt_name": "foobar", 00:15:14.141 "method": "nvmf_create_subsystem", 00:15:14.141 "req_id": 1 00:15:14.141 } 00:15:14.141 Got JSON-RPC error response 00:15:14.141 response: 00:15:14.141 { 00:15:14.141 "code": -32603, 00:15:14.141 "message": "Unable to find target foobar" 00:15:14.141 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:15:14.141 08:12:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode27371 00:15:14.141 [2024-11-20 08:12:28.042506] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27371: invalid serial number 'SPDKISFASTANDAWESOME' 00:15:14.141 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:15:14.141 { 00:15:14.141 "nqn": "nqn.2016-06.io.spdk:cnode27371", 00:15:14.141 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:14.141 "method": "nvmf_create_subsystem", 00:15:14.141 "req_id": 1 00:15:14.141 } 00:15:14.141 Got JSON-RPC error response 00:15:14.141 response: 00:15:14.141 { 00:15:14.141 "code": -32602, 00:15:14.141 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:14.141 }' 00:15:14.141 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:15:14.141 { 00:15:14.141 "nqn": "nqn.2016-06.io.spdk:cnode27371", 00:15:14.141 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:15:14.141 "method": "nvmf_create_subsystem", 00:15:14.141 
"req_id": 1 00:15:14.141 } 00:15:14.141 Got JSON-RPC error response 00:15:14.141 response: 00:15:14.141 { 00:15:14.141 "code": -32602, 00:15:14.141 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:15:14.141 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:14.141 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:15:14.141 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode27842 00:15:14.401 [2024-11-20 08:12:28.235169] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27842: invalid model number 'SPDK_Controller' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:15:14.401 { 00:15:14.401 "nqn": "nqn.2016-06.io.spdk:cnode27842", 00:15:14.401 "model_number": "SPDK_Controller\u001f", 00:15:14.401 "method": "nvmf_create_subsystem", 00:15:14.401 "req_id": 1 00:15:14.401 } 00:15:14.401 Got JSON-RPC error response 00:15:14.401 response: 00:15:14.401 { 00:15:14.401 "code": -32602, 00:15:14.401 "message": "Invalid MN SPDK_Controller\u001f" 00:15:14.401 }' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:15:14.401 { 00:15:14.401 "nqn": "nqn.2016-06.io.spdk:cnode27842", 00:15:14.401 "model_number": "SPDK_Controller\u001f", 00:15:14.401 "method": "nvmf_create_subsystem", 00:15:14.401 "req_id": 1 00:15:14.401 } 00:15:14.401 Got JSON-RPC error response 00:15:14.401 response: 00:15:14.401 { 00:15:14.401 "code": -32602, 00:15:14.401 "message": "Invalid MN SPDK_Controller\u001f" 00:15:14.401 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:14.401 08:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:15:14.401 08:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:15:14.401 08:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.401 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.402 08:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ ) == \- ]] 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo ') /@ccZ>0=jVM2w>qs)/!' 00:15:14.402 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s ') /@ccZ>0=jVM2w>qs)/!' nqn.2016-06.io.spdk:cnode26619 00:15:14.660 [2024-11-20 08:12:28.576349] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26619: invalid serial number ') /@ccZ>0=jVM2w>qs)/!' 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:15:14.660 { 00:15:14.660 "nqn": "nqn.2016-06.io.spdk:cnode26619", 00:15:14.660 "serial_number": ") /@ccZ>0=jVM2w>qs)/!", 00:15:14.660 "method": "nvmf_create_subsystem", 00:15:14.660 "req_id": 1 00:15:14.660 } 00:15:14.660 Got JSON-RPC error response 00:15:14.660 response: 00:15:14.660 { 00:15:14.660 "code": -32602, 00:15:14.660 "message": "Invalid SN ) /@ccZ>0=jVM2w>qs)/!" 00:15:14.660 }' 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:15:14.660 { 00:15:14.660 "nqn": "nqn.2016-06.io.spdk:cnode26619", 00:15:14.660 "serial_number": ") /@ccZ>0=jVM2w>qs)/!", 00:15:14.660 "method": "nvmf_create_subsystem", 00:15:14.660 "req_id": 1 00:15:14.660 } 00:15:14.660 Got JSON-RPC error response 00:15:14.660 response: 00:15:14.660 { 00:15:14.660 "code": -32602, 00:15:14.660 "message": "Invalid SN ) /@ccZ>0=jVM2w>qs)/!" 
00:15:14.660 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:15:14.660 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 
-- # echo -e '\x4e' 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf 
%x 64 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 
00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:15:14.661 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.920 
08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 
00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 54 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x36' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=6 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.920 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 
00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:15:14.921 
08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:15:14.921 08:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:15:14.921 08:12:28 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ T == \- ]] 00:15:14.921 08:12:28 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'TNG1kG1kG1kG1kG1kG1kG1k /dev/null' 00:15:17.240 08:12:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:19.145 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:15:19.145 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:19.145 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@121 -- # return 0 00:15:19.145 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:19.145 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:19.145 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:19.145 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:15:19.145 08:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:15:19.145 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:19.145 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:15:19.145 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # _dev=0 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # dev_map=() 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@274 -- # iptr 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # iptables-save 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@548 -- # iptables-restore 00:15:19.405 00:15:19.405 real 0m12.180s 00:15:19.405 user 0m18.683s 00:15:19.405 sys 0m5.508s 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:15:19.405 ************************************ 00:15:19.405 END TEST nvmf_invalid 00:15:19.405 ************************************ 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:19.405 ************************************ 00:15:19.405 START TEST nvmf_connect_stress 00:15:19.405 ************************************ 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:15:19.405 * Looking for test storage... 
00:15:19.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:19.405 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:19.406 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:15:19.406 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:15:19.665 08:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:15:19.665 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:19.665 08:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:19.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.665 --rc genhtml_branch_coverage=1 00:15:19.666 --rc genhtml_function_coverage=1 00:15:19.666 --rc genhtml_legend=1 00:15:19.666 --rc geninfo_all_blocks=1 00:15:19.666 --rc geninfo_unexecuted_blocks=1 00:15:19.666 00:15:19.666 ' 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:19.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.666 --rc genhtml_branch_coverage=1 00:15:19.666 --rc genhtml_function_coverage=1 00:15:19.666 --rc genhtml_legend=1 00:15:19.666 --rc geninfo_all_blocks=1 00:15:19.666 --rc geninfo_unexecuted_blocks=1 00:15:19.666 00:15:19.666 ' 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:19.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.666 --rc genhtml_branch_coverage=1 00:15:19.666 --rc genhtml_function_coverage=1 00:15:19.666 --rc genhtml_legend=1 00:15:19.666 --rc geninfo_all_blocks=1 00:15:19.666 --rc geninfo_unexecuted_blocks=1 00:15:19.666 00:15:19.666 ' 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:19.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:19.666 --rc genhtml_branch_coverage=1 00:15:19.666 --rc genhtml_function_coverage=1 00:15:19.666 --rc genhtml_legend=1 00:15:19.666 --rc geninfo_all_blocks=1 00:15:19.666 --rc geninfo_unexecuted_blocks=1 00:15:19.666 00:15:19.666 ' 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 
00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:19.666 08:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:19.666 08:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@50 -- # : 0 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:19.666 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:15:19.666 08:12:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:15:19.666 08:12:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # net_devs=() 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # e810=() 
00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # local -ga e810 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # x722=() 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # local -ga x722 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # mlx=() 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:26.231 08:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:26.231 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:26.232 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 
00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:26.232 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:26.232 Found net devices under 0000:86:00.0: cvl_0_0 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:26.232 Found net devices under 0000:86:00.1: cvl_0_1 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:26.232 
08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@247 -- # create_target_ns 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@25 -- # local no=1 
type=phy transport=tcp ip_pool=0x0a000001 max 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:15:26.232 08:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:26.232 10.0.0.1 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:26.232 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:26.233 10.0.0.2 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:15:26.233 
08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 
00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:26.233 08:12:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:26.233 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:15:26.233 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.389 ms 00:15:26.233 00:15:26.233 --- 10.0.0.1 ping statistics --- 00:15:26.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.233 rtt min/avg/max/mdev = 0.389/0.389/0.389/0.000 ms 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:26.233 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:15:26.233 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:15:26.233 00:15:26.233 --- 10.0.0.2 ping statistics --- 00:15:26.233 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:26.233 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # return 0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:26.233 
08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:26.233 
08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:26.233 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # return 1 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev= 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@160 -- # return 0 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target0 
in_ns=NVMF_TARGET_NS_CMD ip 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 
00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # return 1 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@159 -- # dev= 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@160 -- # return 0 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:15:26.234 ' 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@311 -- # [[ tcp == 
\t\c\p ]] 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # nvmfpid=1643497 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # waitforlisten 1643497 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 1643497 ']' 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:26.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:26.234 08:12:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.234 [2024-11-20 08:12:39.626492] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:15:26.234 [2024-11-20 08:12:39.626545] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:26.234 [2024-11-20 08:12:39.707100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:26.234 [2024-11-20 08:12:39.751684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:26.234 [2024-11-20 08:12:39.751714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:26.234 [2024-11-20 08:12:39.751722] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:26.234 [2024-11-20 08:12:39.751728] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:26.234 [2024-11-20 08:12:39.751733] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:26.234 [2024-11-20 08:12:39.753101] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:15:26.234 [2024-11-20 08:12:39.753188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:15:26.234 [2024-11-20 08:12:39.753187] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:26.491 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:26.491 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:15:26.491 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:26.491 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:26.491 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.491 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:26.491 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:26.491 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.491 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.491 [2024-11-20 08:12:40.510901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:26.748 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.749 [2024-11-20 08:12:40.531120] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:26.749 NULL1 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=1643638 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # 
rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.749 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.006 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.006 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:27.006 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.006 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.006 08:12:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.262 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.262 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:27.262 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.262 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.262 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:27.824 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.824 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:27.824 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:27.824 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.824 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.081 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.081 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:28.081 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.081 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.081 08:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.338 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.338 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:28.338 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.338 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.338 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:28.595 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.595 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:28.595 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:28.595 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.595 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.157 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.158 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:29.158 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.158 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.158 08:12:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.414 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.414 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:29.414 08:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.414 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.414 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.671 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.671 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:29.671 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.671 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.671 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:29.927 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.927 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:29.927 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:29.927 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.927 08:12:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.184 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.184 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:30.184 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.441 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.441 
08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.698 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.698 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:30.698 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.698 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.698 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:30.955 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.955 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:30.955 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:30.955 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.955 08:12:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.212 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.212 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:31.212 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.212 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.212 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:31.775 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.775 
08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:31.775 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:31.775 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.775 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.032 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.032 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:32.032 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.032 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.032 08:12:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.288 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.289 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:32.289 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.289 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.289 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.545 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.545 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:32.545 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:15:32.545 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.545 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:32.802 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.802 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:32.802 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:32.802 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.802 08:12:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.407 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.407 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:33.407 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.407 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.407 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:33.664 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.664 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:33.664 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.664 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.664 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:15:33.920 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.920 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:33.920 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:33.920 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.920 08:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.177 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.177 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:34.177 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.177 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.177 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.433 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.433 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:34.433 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.433 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.433 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:34.996 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.996 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 1643638 00:15:34.996 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:34.996 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.996 08:12:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.252 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.252 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:35.252 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.252 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.252 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.508 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.508 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:35.508 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.508 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.508 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:35.764 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.764 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:35.764 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:35.764 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:35.764 08:12:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.329 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.329 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:36.330 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.330 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.330 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.587 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.587 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:36.587 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:15:36.587 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.587 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:36.844 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 1643638 00:15:36.844 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (1643638) - No such process 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 1643638 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@99 -- # sync 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # set +e 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:36.844 rmmod nvme_tcp 00:15:36.844 rmmod nvme_fabrics 00:15:36.844 rmmod nvme_keyring 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # set -e 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # return 0 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # '[' -n 1643497 ']' 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@337 -- # killprocess 1643497 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 1643497 ']' 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 1643497 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@959 -- # uname 00:15:36.844 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.845 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1643497 00:15:36.845 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:36.845 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:36.845 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1643497' 00:15:36.845 killing process with pid 1643497 00:15:36.845 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 1643497 00:15:36.845 08:12:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 1643497 00:15:37.103 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:37.103 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:15:37.103 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@254 -- # local dev 00:15:37.103 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:15:37.103 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:37.103 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:37.103 08:12:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:15:39.638 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@121 -- # return 0 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev 
cvl_0_1' 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # _dev=0 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@274 -- # iptr 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # iptables-save 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@548 -- # iptables-restore 00:15:39.638 00:15:39.638 real 0m19.821s 00:15:39.638 user 0m41.557s 00:15:39.638 sys 0m8.591s 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:15:39.638 ************************************ 00:15:39.638 END TEST nvmf_connect_stress 00:15:39.638 ************************************ 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:39.638 ************************************ 00:15:39.638 START TEST nvmf_fused_ordering 00:15:39.638 
************************************ 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:15:39.638 * Looking for test storage... 00:15:39.638 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:39.638 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # 
ver2_l=1 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:39.639 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:39.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.639 --rc genhtml_branch_coverage=1 00:15:39.639 --rc genhtml_function_coverage=1 00:15:39.639 --rc genhtml_legend=1 00:15:39.639 --rc geninfo_all_blocks=1 00:15:39.639 --rc geninfo_unexecuted_blocks=1 00:15:39.639 00:15:39.639 ' 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:39.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.639 --rc genhtml_branch_coverage=1 00:15:39.639 --rc genhtml_function_coverage=1 00:15:39.639 --rc genhtml_legend=1 00:15:39.639 --rc geninfo_all_blocks=1 00:15:39.639 --rc geninfo_unexecuted_blocks=1 00:15:39.639 00:15:39.639 ' 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:39.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.639 --rc genhtml_branch_coverage=1 00:15:39.639 --rc genhtml_function_coverage=1 00:15:39.639 --rc genhtml_legend=1 00:15:39.639 --rc geninfo_all_blocks=1 00:15:39.639 --rc geninfo_unexecuted_blocks=1 00:15:39.639 00:15:39.639 ' 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:39.639 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:39.639 --rc genhtml_branch_coverage=1 00:15:39.639 --rc genhtml_function_coverage=1 00:15:39.639 --rc genhtml_legend=1 00:15:39.639 --rc geninfo_all_blocks=1 00:15:39.639 --rc geninfo_unexecuted_blocks=1 00:15:39.639 00:15:39.639 ' 
00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:39.639 08:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@50 -- # : 0 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:39.639 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # remove_target_ns 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # xtrace_disable 00:15:39.639 08:12:53 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
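The `[: : integer expression expected` message logged above comes from `'[' '' -eq 1 ']'` in common.sh: `-eq` needs numeric operands, and the variable being tested expands to an empty string. A minimal reproduction and a common guard (the variable names below are illustrative, not taken from common.sh):

```shell
# Reproduce: -eq with an empty operand prints
# "[: : integer expression expected" to stderr and returns non-zero,
# so the test falls through to the else branch.
flag=''
if [ "$flag" -eq 1 ] 2>/dev/null; then
  result=enabled
else
  result=disabled
fi

# Guard: default an empty value to 0 before the numeric comparison.
if [ "${flag:-0}" -eq 1 ]; then
  guarded=enabled
else
  guarded=disabled
fi
echo "$result $guarded"
```

Because the non-zero status simply selects the else branch, the run continues past the message, which is why the trace proceeds normally afterwards.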
common/autotest_common.sh@10 -- # set +x 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # pci_devs=() 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # net_devs=() 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # e810=() 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # local -ga e810 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # x722=() 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # local -ga x722 00:15:46.213 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # mlx=() 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # local -ga mlx 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- 
# pci_devs=("${e810[@]}") 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:46.214 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:46.214 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:46.214 08:12:59 
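The device discovery traced above buckets NICs by PCI vendor:device ID: `0x8086:0x159b` is an Intel E810 port (bound to the `ice` driver), which is why both `0000:86:00.0` and `0000:86:00.1` land in the `e810` array. A sketch of that classification (an illustrative helper, not a function from common.sh, covering only a subset of the IDs the script registers):

```shell
# Classify a NIC by PCI vendor and device ID, mirroring the e810/x722/mlx
# arrays built in the log above. IDs outside the known set are "unknown".
classify_nic() {
  local vendor=$1 device=$2
  case "$vendor:$device" in
    0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 (ice)
    0x8086:0x37d2)               echo x722 ;;    # Intel X722 (i40e)
    0x15b3:*)                    echo mlx ;;     # Mellanox/NVIDIA
    *)                           echo unknown ;;
  esac
}

classify_nic 0x8086 0x159b   # the two ports found in this run
classify_nic 0x15b3 0x1017
```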
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:46.214 Found net devices under 0000:86:00.0: cvl_0_0 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:46.214 Found net devices under 0000:86:00.1: cvl_0_1 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # is_hw=yes 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@247 -- # create_target_ns 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@136 -- # ip netns add 
nvmf_ns_spdk 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@28 -- # local -g _dev 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:46.214 08:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 
in_ns= 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:46.214 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772161 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:46.215 10.0.0.1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@11 -- # local val=167772162 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:46.215 10.0.0.2 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
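The `val_to_ip` calls above turn a packed 32-bit pool value into a dotted quad: `167772161` is `0x0A000001`, i.e. `10.0.0.1`, and the `ips=("$ip" $((++ip)))` pairing gives the target side `10.0.0.2`. A self-contained sketch of the same conversion (the helper name mirrors the log; the octet arithmetic is standard bash):

```shell
# Print an IPv4 dotted quad from a packed unsigned 32-bit value by
# extracting one octet per byte, most significant first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1 (initiator side)
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2 (target side)
```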
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # 
(( pair = 0 )) 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 
NVMF_TARGET_NS_CMD 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:46.215 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:46.215 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.392 ms 00:15:46.215 00:15:46.215 --- 10.0.0.1 ping statistics --- 00:15:46.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.215 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target0 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:46.215 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:46.215 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:15:46.215 00:15:46.215 --- 10.0.0.2 ping statistics --- 00:15:46.215 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:46.215 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # return 0 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:46.215 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # return 1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev= 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@160 -- # return 0 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target0 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:46.216 08:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # local dev=target1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 
00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # return 1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@159 -- # dev= 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@160 -- # return 0 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:15:46.216 ' 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:46.216 
08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # nvmfpid=1648922 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # waitforlisten 1648922 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 1648922 ']' 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:46.216 [2024-11-20 08:12:59.550595] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:15:46.216 [2024-11-20 08:12:59.550645] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:46.216 [2024-11-20 08:12:59.630753] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:46.216 [2024-11-20 08:12:59.671749] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:15:46.216 [2024-11-20 08:12:59.671784] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:46.216 [2024-11-20 08:12:59.671791] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:46.216 [2024-11-20 08:12:59.671797] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:46.216 [2024-11-20 08:12:59.671802] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:46.216 [2024-11-20 08:12:59.672375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:46.216 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:46.217 [2024-11-20 08:12:59.807527] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:46.217 [2024-11-20 08:12:59.827701] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:46.217 NULL1 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:15:46.217 08:12:59 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.217 08:12:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:15:46.217 [2024-11-20 08:12:59.885360] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:15:46.217 [2024-11-20 08:12:59.885393] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1648951 ] 00:15:46.476 Attached to nqn.2016-06.io.spdk:cnode1 00:15:46.476 Namespace ID: 1 size: 1GB 00:15:46.476 fused_ordering(0) 00:15:46.476 fused_ordering(1) 00:15:46.476 fused_ordering(2) 00:15:46.476 fused_ordering(3) 00:15:46.476 fused_ordering(4) 00:15:46.476 fused_ordering(5) 00:15:46.476 fused_ordering(6) 00:15:46.476 fused_ordering(7) 00:15:46.476 fused_ordering(8) 00:15:46.476 fused_ordering(9) 00:15:46.476 fused_ordering(10) 00:15:46.476 fused_ordering(11) 00:15:46.476 fused_ordering(12) 00:15:46.476 fused_ordering(13) 00:15:46.476 fused_ordering(14) 00:15:46.476 fused_ordering(15) 00:15:46.476 fused_ordering(16) 00:15:46.476 fused_ordering(17) 00:15:46.476 fused_ordering(18) 00:15:46.476 fused_ordering(19) 00:15:46.476 fused_ordering(20) 00:15:46.476 fused_ordering(21) 00:15:46.476 fused_ordering(22) 00:15:46.476 fused_ordering(23) 00:15:46.476 fused_ordering(24) 00:15:46.476 fused_ordering(25) 00:15:46.476 fused_ordering(26) 00:15:46.476 fused_ordering(27) 00:15:46.476 fused_ordering(28) 00:15:46.476 fused_ordering(29) 00:15:46.476 fused_ordering(30) 00:15:46.476 fused_ordering(31) 00:15:46.476 fused_ordering(32) 00:15:46.476 fused_ordering(33) 00:15:46.476 fused_ordering(34) 00:15:46.476 fused_ordering(35) 00:15:46.476 fused_ordering(36) 00:15:46.476 fused_ordering(37) 00:15:46.476 fused_ordering(38) 00:15:46.476 fused_ordering(39) 00:15:46.476 fused_ordering(40) 00:15:46.476 fused_ordering(41) 00:15:46.476 fused_ordering(42) 00:15:46.476 fused_ordering(43) 00:15:46.476 fused_ordering(44) 00:15:46.476 fused_ordering(45) 00:15:46.476 fused_ordering(46) 00:15:46.476 fused_ordering(47) 00:15:46.476 fused_ordering(48) 00:15:46.476 fused_ordering(49) 00:15:46.476 
fused_ordering(50) 00:15:46.476 fused_ordering(51) 00:15:46.476 fused_ordering(52) 00:15:46.476 fused_ordering(53) 00:15:46.476 fused_ordering(54) 00:15:46.476 fused_ordering(55) 00:15:46.476 fused_ordering(56) 00:15:46.476 fused_ordering(57) 00:15:46.476 fused_ordering(58) 00:15:46.476 fused_ordering(59) 00:15:46.476 fused_ordering(60) 00:15:46.476 fused_ordering(61) 00:15:46.476 fused_ordering(62) 00:15:46.476 fused_ordering(63) 00:15:46.476 fused_ordering(64) 00:15:46.476 fused_ordering(65) 00:15:46.476 fused_ordering(66) 00:15:46.476 fused_ordering(67) 00:15:46.476 fused_ordering(68) 00:15:46.476 fused_ordering(69) 00:15:46.476 fused_ordering(70) 00:15:46.476 fused_ordering(71) 00:15:46.476 fused_ordering(72) 00:15:46.476 fused_ordering(73) 00:15:46.477 fused_ordering(74) 00:15:46.477 fused_ordering(75) 00:15:46.477 fused_ordering(76) 00:15:46.477 fused_ordering(77) 00:15:46.477 fused_ordering(78) 00:15:46.477 fused_ordering(79) 00:15:46.477 fused_ordering(80) 00:15:46.477 fused_ordering(81) 00:15:46.477 fused_ordering(82) 00:15:46.477 fused_ordering(83) 00:15:46.477 fused_ordering(84) 00:15:46.477 fused_ordering(85) 00:15:46.477 fused_ordering(86) 00:15:46.477 fused_ordering(87) 00:15:46.477 fused_ordering(88) 00:15:46.477 fused_ordering(89) 00:15:46.477 fused_ordering(90) 00:15:46.477 fused_ordering(91) 00:15:46.477 fused_ordering(92) 00:15:46.477 fused_ordering(93) 00:15:46.477 fused_ordering(94) 00:15:46.477 fused_ordering(95) 00:15:46.477 fused_ordering(96) 00:15:46.477 fused_ordering(97) 00:15:46.477 fused_ordering(98) 00:15:46.477 fused_ordering(99) 00:15:46.477 fused_ordering(100) 00:15:46.477 fused_ordering(101) 00:15:46.477 fused_ordering(102) 00:15:46.477 fused_ordering(103) 00:15:46.477 fused_ordering(104) 00:15:46.477 fused_ordering(105) 00:15:46.477 fused_ordering(106) 00:15:46.477 fused_ordering(107) 00:15:46.477 fused_ordering(108) 00:15:46.477 fused_ordering(109) 00:15:46.477 fused_ordering(110) 00:15:46.477 fused_ordering(111) 00:15:46.477 
fused_ordering(112) 00:15:46.477 fused_ordering(113) 00:15:46.477 fused_ordering(114) 00:15:46.477 fused_ordering(115) 00:15:46.477 fused_ordering(116) 00:15:46.477 fused_ordering(117) 00:15:46.477 fused_ordering(118) 00:15:46.477 fused_ordering(119) 00:15:46.477 fused_ordering(120) 00:15:46.477 fused_ordering(121) 00:15:46.477 fused_ordering(122) 00:15:46.477 fused_ordering(123) 00:15:46.477 fused_ordering(124) 00:15:46.477 fused_ordering(125) 00:15:46.477 fused_ordering(126) 00:15:46.477 fused_ordering(127) 00:15:46.477 fused_ordering(128) 00:15:46.477 fused_ordering(129) 00:15:46.477 fused_ordering(130) 00:15:46.477 fused_ordering(131) 00:15:46.477 fused_ordering(132) 00:15:46.477 fused_ordering(133) 00:15:46.477 fused_ordering(134) 00:15:46.477 fused_ordering(135) 00:15:46.477 fused_ordering(136) 00:15:46.477 fused_ordering(137) 00:15:46.477 fused_ordering(138) 00:15:46.477 fused_ordering(139) 00:15:46.477 fused_ordering(140) 00:15:46.477 fused_ordering(141) 00:15:46.477 fused_ordering(142) 00:15:46.477 fused_ordering(143) 00:15:46.477 fused_ordering(144) 00:15:46.477 fused_ordering(145) 00:15:46.477 fused_ordering(146) 00:15:46.477 fused_ordering(147) 00:15:46.477 fused_ordering(148) 00:15:46.477 fused_ordering(149) 00:15:46.477 fused_ordering(150) 00:15:46.477 fused_ordering(151) 00:15:46.477 fused_ordering(152) 00:15:46.477 fused_ordering(153) 00:15:46.477 fused_ordering(154) 00:15:46.477 fused_ordering(155) 00:15:46.477 fused_ordering(156) 00:15:46.477 fused_ordering(157) 00:15:46.477 fused_ordering(158) 00:15:46.477 fused_ordering(159) 00:15:46.477 fused_ordering(160) 00:15:46.477 fused_ordering(161) 00:15:46.477 fused_ordering(162) 00:15:46.477 fused_ordering(163) 00:15:46.477 fused_ordering(164) 00:15:46.477 fused_ordering(165) 00:15:46.477 fused_ordering(166) 00:15:46.477 fused_ordering(167) 00:15:46.477 fused_ordering(168) 00:15:46.477 fused_ordering(169) 00:15:46.477 fused_ordering(170) 00:15:46.477 fused_ordering(171) 00:15:46.477 fused_ordering(172) 
00:15:46.477 fused_ordering(173) 00:15:46.477 fused_ordering(174) 00:15:46.477 fused_ordering(175) 00:15:46.477 fused_ordering(176) 00:15:46.477 fused_ordering(177) 00:15:46.477 fused_ordering(178) 00:15:46.477 fused_ordering(179) 00:15:46.477 fused_ordering(180) 00:15:46.477 fused_ordering(181) 00:15:46.477 fused_ordering(182) 00:15:46.477 fused_ordering(183) 00:15:46.477 fused_ordering(184) 00:15:46.477 fused_ordering(185) 00:15:46.477 fused_ordering(186) 00:15:46.477 fused_ordering(187) 00:15:46.477 fused_ordering(188) 00:15:46.477 fused_ordering(189) 00:15:46.477 fused_ordering(190) 00:15:46.477 fused_ordering(191) 00:15:46.477 fused_ordering(192) 00:15:46.477 fused_ordering(193) 00:15:46.477 fused_ordering(194) 00:15:46.477 fused_ordering(195) 00:15:46.477 fused_ordering(196) 00:15:46.477 fused_ordering(197) 00:15:46.477 fused_ordering(198) 00:15:46.477 fused_ordering(199) 00:15:46.477 fused_ordering(200) 00:15:46.477 fused_ordering(201) 00:15:46.477 fused_ordering(202) 00:15:46.477 fused_ordering(203) 00:15:46.477 fused_ordering(204) 00:15:46.477 fused_ordering(205) 00:15:46.736 fused_ordering(206) 00:15:46.736 fused_ordering(207) 00:15:46.736 fused_ordering(208) 00:15:46.736 fused_ordering(209) 00:15:46.736 fused_ordering(210) 00:15:46.736 fused_ordering(211) 00:15:46.736 fused_ordering(212) 00:15:46.736 fused_ordering(213) 00:15:46.736 fused_ordering(214) 00:15:46.736 fused_ordering(215) 00:15:46.736 fused_ordering(216) 00:15:46.736 fused_ordering(217) 00:15:46.736 fused_ordering(218) 00:15:46.736 fused_ordering(219) 00:15:46.736 fused_ordering(220) 00:15:46.736 fused_ordering(221) 00:15:46.736 fused_ordering(222) 00:15:46.736 fused_ordering(223) 00:15:46.736 fused_ordering(224) 00:15:46.736 fused_ordering(225) 00:15:46.736 fused_ordering(226) 00:15:46.736 fused_ordering(227) 00:15:46.736 fused_ordering(228) 00:15:46.736 fused_ordering(229) 00:15:46.736 fused_ordering(230) 00:15:46.736 fused_ordering(231) 00:15:46.736 fused_ordering(232) 00:15:46.736 
fused_ordering(233) 00:15:46.736 fused_ordering(234) 00:15:46.736 fused_ordering(235) 00:15:46.736 fused_ordering(236) 00:15:46.736 fused_ordering(237) 00:15:46.736 fused_ordering(238) 00:15:46.736 fused_ordering(239) 00:15:46.736 fused_ordering(240) 00:15:46.736 fused_ordering(241) 00:15:46.736 fused_ordering(242) 00:15:46.736 fused_ordering(243) 00:15:46.736 fused_ordering(244) 00:15:46.736 fused_ordering(245) 00:15:46.736 fused_ordering(246) 00:15:46.736 fused_ordering(247) 00:15:46.736 fused_ordering(248) 00:15:46.736 fused_ordering(249) 00:15:46.736 fused_ordering(250) 00:15:46.736 fused_ordering(251) 00:15:46.736 fused_ordering(252) 00:15:46.736 fused_ordering(253) 00:15:46.736 fused_ordering(254) 00:15:46.736 fused_ordering(255) 00:15:46.736 fused_ordering(256) 00:15:46.736 fused_ordering(257) 00:15:46.736 fused_ordering(258) 00:15:46.736 fused_ordering(259) 00:15:46.736 fused_ordering(260) 00:15:46.736 fused_ordering(261) 00:15:46.736 fused_ordering(262) 00:15:46.736 fused_ordering(263) 00:15:46.736 fused_ordering(264) 00:15:46.736 fused_ordering(265) 00:15:46.736 fused_ordering(266) 00:15:46.736 fused_ordering(267) 00:15:46.736 fused_ordering(268) 00:15:46.736 fused_ordering(269) 00:15:46.736 fused_ordering(270) 00:15:46.736 fused_ordering(271) 00:15:46.736 fused_ordering(272) 00:15:46.736 fused_ordering(273) 00:15:46.736 fused_ordering(274) 00:15:46.736 fused_ordering(275) 00:15:46.736 fused_ordering(276) 00:15:46.736 fused_ordering(277) 00:15:46.736 fused_ordering(278) 00:15:46.736 fused_ordering(279) 00:15:46.736 fused_ordering(280) 00:15:46.736 fused_ordering(281) 00:15:46.736 fused_ordering(282) 00:15:46.736 fused_ordering(283) 00:15:46.736 fused_ordering(284) 00:15:46.736 fused_ordering(285) 00:15:46.736 fused_ordering(286) 00:15:46.736 fused_ordering(287) 00:15:46.737 fused_ordering(288) 00:15:46.737 fused_ordering(289) 00:15:46.737 fused_ordering(290) 00:15:46.737 fused_ordering(291) 00:15:46.737 fused_ordering(292) 00:15:46.737 fused_ordering(293) 
00:15:46.737 fused_ordering(294) 00:15:46.737 fused_ordering(295) 00:15:46.737 fused_ordering(296) 00:15:46.737 fused_ordering(297) 00:15:46.737 fused_ordering(298) 00:15:46.737 fused_ordering(299) 00:15:46.737 fused_ordering(300) 00:15:46.737 fused_ordering(301) 00:15:46.737 fused_ordering(302) 00:15:46.737 fused_ordering(303) 00:15:46.737 fused_ordering(304) 00:15:46.737 fused_ordering(305) 00:15:46.737 fused_ordering(306) 00:15:46.737 fused_ordering(307) 00:15:46.737 fused_ordering(308) 00:15:46.737 fused_ordering(309) 00:15:46.737 fused_ordering(310) 00:15:46.737 fused_ordering(311) 00:15:46.737 fused_ordering(312) 00:15:46.737 fused_ordering(313) 00:15:46.737 fused_ordering(314) 00:15:46.737 fused_ordering(315) 00:15:46.737 fused_ordering(316) 00:15:46.737 fused_ordering(317) 00:15:46.737 fused_ordering(318) 00:15:46.737 fused_ordering(319) 00:15:46.737 fused_ordering(320) 00:15:46.737 fused_ordering(321) 00:15:46.737 fused_ordering(322) 00:15:46.737 fused_ordering(323) 00:15:46.737 fused_ordering(324) 00:15:46.737 fused_ordering(325) 00:15:46.737 fused_ordering(326) 00:15:46.737 fused_ordering(327) 00:15:46.737 fused_ordering(328) 00:15:46.737 fused_ordering(329) 00:15:46.737 fused_ordering(330) 00:15:46.737 fused_ordering(331) 00:15:46.737 fused_ordering(332) 00:15:46.737 fused_ordering(333) 00:15:46.737 fused_ordering(334) 00:15:46.737 fused_ordering(335) 00:15:46.737 fused_ordering(336) 00:15:46.737 fused_ordering(337) 00:15:46.737 fused_ordering(338) 00:15:46.737 fused_ordering(339) 00:15:46.737 fused_ordering(340) 00:15:46.737 fused_ordering(341) 00:15:46.737 fused_ordering(342) 00:15:46.737 fused_ordering(343) 00:15:46.737 fused_ordering(344) 00:15:46.737 fused_ordering(345) 00:15:46.737 fused_ordering(346) 00:15:46.737 fused_ordering(347) 00:15:46.737 fused_ordering(348) 00:15:46.737 fused_ordering(349) 00:15:46.737 fused_ordering(350) 00:15:46.737 fused_ordering(351) 00:15:46.737 fused_ordering(352) 00:15:46.737 fused_ordering(353) 00:15:46.737 
fused_ordering(354) 00:15:46.737 [... per-iteration trace condensed: fused_ordering(354) through fused_ordering(1018), logged between 00:15:46.737 and 00:15:47.826 ...] fused_ordering(1018) 00:15:47.826 
fused_ordering(1019) 00:15:47.826 fused_ordering(1020) 00:15:47.826 fused_ordering(1021) 00:15:47.826 fused_ordering(1022) 00:15:47.826 fused_ordering(1023) 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # nvmfcleanup 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@99 -- # sync 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # set +e 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # for i in {1..20} 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:15:47.826 rmmod nvme_tcp 00:15:47.826 rmmod nvme_fabrics 00:15:47.826 rmmod nvme_keyring 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # set -e 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # return 0 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # '[' -n 1648922 ']' 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@337 -- # killprocess 1648922 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 1648922 ']' 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 1648922 00:15:47.826 08:13:01 
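The teardown traced above runs a `killprocess` helper that checks the pid is alive with `kill -0`, refuses to kill a bare `sudo` wrapper, then kills and reaps the process. A minimal sketch of that logic follows; the helper name and exact guards are reconstructed from the trace, not taken from the real `autotest_common.sh`.

```shell
# Sketch of the killprocess sequence visible in the trace above.
# Reconstructed from the log; treat names and checks as approximations.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # kill -0 only probes existence; it sends no signal
    kill -0 "$pid" 2>/dev/null || return 0
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    # Mirror the log's guard: never kill a bare sudo wrapper
    [ "$process_name" = "sudo" ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    # wait succeeds only for our own children; ignore failures otherwise
    wait "$pid" 2>/dev/null || true
    return 0
}
```

In the log this runs against the nvmf target (pid 1648922, reactor_1); the sketch behaves the same way for any child process.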
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1648922 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:15:47.826 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1648922' 00:15:47.827 killing process with pid 1648922 00:15:47.827 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 1648922 00:15:47.827 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 1648922 00:15:48.086 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:15:48.086 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # nvmf_fini 00:15:48.086 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@254 -- # local dev 00:15:48.086 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@257 -- # remove_target_ns 00:15:48.086 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:48.086 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:48.086 08:13:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@258 -- # 
delete_main_bridge 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@121 -- # return 0 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:15:49.988 08:13:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:15:49.988 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:15:49.988 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:15:49.988 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:15:49.988 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
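The `flush_ip` calls above build their command string first and then `eval` it, so the same helper can run either on the host or inside a network namespace (`in_ns` is empty in this run, which is why the trace shows a leading space before `ip addr flush`). A small sketch of that string-building pattern, with the helper renamed to make clear it is an illustration rather than the real `setup.sh` function:

```shell
# Sketch of setup.sh's flush_ip command construction seen in the trace.
# flush_ip_cmd is a hypothetical name; only the expansion pattern is the point.
flush_ip_cmd() {
    local dev=$1 in_ns=${2:-}
    # ${in_ns:+...} expands to the netns prefix only when in_ns is non-empty
    echo "${in_ns:+ip netns exec $in_ns }ip addr flush dev $dev"
}
```

With `in_ns` unset this yields `ip addr flush dev cvl_0_0`, matching the eval'd command in the log; with a namespace it would prepend `ip netns exec <ns>`.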
nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:15:49.988 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:15:49.988 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:15:49.988 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # _dev=0 00:15:49.988 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # dev_map=() 00:15:49.988 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@274 -- # iptr 00:15:49.988 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # iptables-save 00:15:49.988 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:15:49.988 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@548 -- # iptables-restore 00:15:50.248 00:15:50.248 real 0m10.852s 00:15:50.248 user 0m5.086s 00:15:50.248 sys 0m5.908s 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:15:50.248 ************************************ 00:15:50.248 END TEST nvmf_fused_ordering 00:15:50.248 ************************************ 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:50.248 ************************************ 00:15:50.248 START TEST nvmf_ns_masking 00:15:50.248 
************************************ 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:15:50.248 * Looking for test storage... 00:15:50.248 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:50.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.248 --rc genhtml_branch_coverage=1 00:15:50.248 --rc genhtml_function_coverage=1 00:15:50.248 --rc genhtml_legend=1 00:15:50.248 --rc geninfo_all_blocks=1 00:15:50.248 --rc geninfo_unexecuted_blocks=1 00:15:50.248 00:15:50.248 ' 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:50.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.248 --rc genhtml_branch_coverage=1 00:15:50.248 --rc genhtml_function_coverage=1 00:15:50.248 --rc genhtml_legend=1 00:15:50.248 --rc geninfo_all_blocks=1 00:15:50.248 --rc geninfo_unexecuted_blocks=1 00:15:50.248 00:15:50.248 ' 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:50.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.248 --rc genhtml_branch_coverage=1 00:15:50.248 --rc genhtml_function_coverage=1 00:15:50.248 --rc genhtml_legend=1 00:15:50.248 --rc geninfo_all_blocks=1 00:15:50.248 --rc geninfo_unexecuted_blocks=1 00:15:50.248 00:15:50.248 ' 00:15:50.248 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:50.248 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:50.248 --rc genhtml_branch_coverage=1 00:15:50.248 --rc genhtml_function_coverage=1 00:15:50.249 --rc genhtml_legend=1 00:15:50.249 --rc geninfo_all_blocks=1 00:15:50.249 --rc geninfo_unexecuted_blocks=1 00:15:50.249 00:15:50.249 ' 00:15:50.249 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:50.249 08:13:04 
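The lcov gate above runs `scripts/common.sh`'s `cmp_versions`/`lt` pair: each version string is split on `.`, `-`, and `:` into an array, then compared component by component (`lt 1.15 2` is what enables the branch/function coverage flags here). A self-contained sketch of that comparison, reconstructed from the trace, so the handling of uneven component counts is an approximation of the real helper:

```shell
# Sketch of scripts/common.sh's cmp_versions / lt, reconstructed from
# the trace above. Splits on the same IFS=.-: and compares numerically.
cmp_versions() {
    local ver1 ver2 v op=$2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$3"
    local len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        # Missing components compare as 0 (so 2 == 2.0)
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && { [ "$op" = ">" ]; return; }
        (( a < b )) && { [ "$op" = "<" ]; return; }
    done
    # All components equal
    [ "$op" = "=" ]
}
lt() { cmp_versions "$1" "<" "$2"; }
```

So `lt 1.15 2` succeeds (1 < 2 decides at the first component), which is exactly the branch the log takes before exporting `LCOV_OPTS`.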
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:15:50.249 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:50.249 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:50.249 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:50.249 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:50.249 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:50.249 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:50.249 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:50.249 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:50.509 08:13:04 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@50 -- # : 0 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:50.509 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=9effce2a-ffc6-4dee-8f89-14e8af207829 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@14 -- # ns2uuid=325d666b-7a13-4da3-9779-46a5da2a0038 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=f4acc0e1-06b9-4f45-912d-3bc329b53d70 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # remove_target_ns 00:15:50.509 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:50.510 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:50.510 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:50.510 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:50.510 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:15:50.510 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # xtrace_disable 00:15:50.510 08:13:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # pci_devs=() 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # net_devs=() 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # e810=() 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # local -ga e810 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # x722=() 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # local -ga x722 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # mlx=() 00:15:57.082 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # local -ga mlx 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:57.083 
08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:57.083 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:57.083 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:57.083 08:13:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:57.083 08:13:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:57.083 Found net devices under 0000:86:00.0: cvl_0_0 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:57.083 08:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:57.083 Found net devices under 0000:86:00.1: cvl_0_1 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # is_hw=yes 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@247 -- # create_target_ns 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@28 -- # local -g _dev 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=0 type=phy 
ip=167772161 transport=tcp ips 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:15:57.083 08:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772161 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:57.083 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:57.084 10.0.0.1 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772162 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:15:57.084 08:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:57.084 10.0.0.2 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:57.084 08:13:10 
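The `val_to_ip` steps traced above (setup.sh@11-@13) turn the integer pool values 167772161 and 167772162 into the dotted-quad addresses 10.0.0.1 and 10.0.0.2 assigned to cvl_0_0 and cvl_0_1. A minimal sketch of that conversion, using plain bash arithmetic (the variable names below are illustrative, not SPDK's own):

```shell
# Sketch: integer-to-dotted-quad conversion as performed by val_to_ip.
# 167772161 == 0x0a000001, the first address of the 10.0.0.x test pool
# seen in the trace; the next pair member is val+1 (10.0.0.2).
val=167772161
printf '%u.%u.%u.%u\n' \
  $(( (val >> 24) & 255 )) \
  $(( (val >> 16) & 255 )) \
  $(( (val >>  8) & 255 )) \
  $((  val        & 255 ))
# prints 10.0.0.1
```

The pool base 0x0a000001 advances by 2 per interface pair (setup.sh@33: `ip_pool += 2`), so each initiator/target pair gets consecutive addresses.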
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:57.084 08:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:57.084 08:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:57.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:57.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.267 ms 00:15:57.084 00:15:57.084 --- 10.0.0.1 ping statistics --- 00:15:57.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.084 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk 
cat /sys/class/net/cvl_0_1/ifalias' 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:15:57.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:57.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:15:57.084 00:15:57.084 --- 10.0.0.2 ping statistics --- 00:15:57.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:57.084 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair++ )) 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # return 0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:15:57.084 08:13:10 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:15:57.084 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=initiator1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # return 
1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev= 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@160 -- # return 0 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target0 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target0 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 
00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # get_net_dev target1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # local dev=target1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # return 1 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@159 -- # dev= 
00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@160 -- # return 0 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:15:57.085 ' 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # nvmfpid=1652948 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # waitforlisten 1652948 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1652948 ']' 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:57.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.085 [2024-11-20 08:13:10.486894] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:15:57.085 [2024-11-20 08:13:10.486946] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:57.085 [2024-11-20 08:13:10.567873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:57.085 [2024-11-20 08:13:10.608775] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:57.085 [2024-11-20 08:13:10.608811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:57.085 [2024-11-20 08:13:10.608819] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:57.085 [2024-11-20 08:13:10.608825] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:57.085 [2024-11-20 08:13:10.608830] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:57.085 [2024-11-20 08:13:10.609417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:15:57.085 [2024-11-20 08:13:10.905393] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:15:57.085 08:13:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:15:57.344 Malloc1 00:15:57.344 08:13:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:57.344 Malloc2 00:15:57.602 08:13:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:15:57.602 08:13:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:15:57.860 08:13:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:15:58.120 [2024-11-20 08:13:11.924020] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:15:58.120 08:13:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:15:58.120 08:13:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f4acc0e1-06b9-4f45-912d-3bc329b53d70 -a 10.0.0.2 -s 4420 -i 4 00:15:58.378 08:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:15:58.378 08:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:15:58.378 08:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:15:58.378 08:13:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:15:58.378 08:13:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:00.281 [ 0]:0x1 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:00.281 
08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=70838c7494844bd58d1956e59f9ebc85 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 70838c7494844bd58d1956e59f9ebc85 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:00.281 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:16:00.540 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:16:00.540 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:00.540 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:00.540 [ 0]:0x1 00:16:00.540 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:00.540 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:00.540 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=70838c7494844bd58d1956e59f9ebc85 00:16:00.540 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 70838c7494844bd58d1956e59f9ebc85 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:00.540 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:16:00.540 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:00.540 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:00.798 [ 1]:0x2 00:16:00.798 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:16:00.798 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:00.798 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f93f9d6d6b7a403d994b020db7847c9a 00:16:00.798 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f93f9d6d6b7a403d994b020db7847c9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:00.798 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:16:00.798 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:01.056 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:01.056 08:13:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:01.056 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:16:01.314 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:16:01.314 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f4acc0e1-06b9-4f45-912d-3bc329b53d70 -a 10.0.0.2 -s 4420 -i 4 00:16:01.572 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:16:01.572 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:01.572 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:01.572 08:13:15 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:16:01.572 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:16:01.572 08:13:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:03.473 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:03.474 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:03.474 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:03.474 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:03.474 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:03.474 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:03.732 [ 0]:0x2 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f93f9d6d6b7a403d994b020db7847c9a 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f93f9d6d6b7a403d994b020db7847c9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:03.732 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:03.990 [ 0]:0x1 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=70838c7494844bd58d1956e59f9ebc85 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 70838c7494844bd58d1956e59f9ebc85 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:03.990 [ 1]:0x2 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f93f9d6d6b7a403d994b020db7847c9a 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f93f9d6d6b7a403d994b020db7847c9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:03.990 08:13:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:04.248 [ 0]:0x2 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f93f9d6d6b7a403d994b020db7847c9a 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f93f9d6d6b7a403d994b020db7847c9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:16:04.248 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:04.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.549 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:04.549 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:16:04.549 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I f4acc0e1-06b9-4f45-912d-3bc329b53d70 -a 10.0.0.2 -s 4420 -i 4 00:16:04.815 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:04.815 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:16:04.815 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:04.815 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:04.815 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:04.815 08:13:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:16:06.753 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:06.753 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:06.753 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:06.753 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:06.753 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:06.753 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:16:06.753 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:16:06.753 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:07.011 [ 0]:0x1 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:07.011 08:13:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=70838c7494844bd58d1956e59f9ebc85 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 70838c7494844bd58d1956e59f9ebc85 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:07.011 [ 1]:0x2 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f93f9d6d6b7a403d994b020db7847c9a 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f93f9d6d6b7a403d994b020db7847c9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:07.011 08:13:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:07.271 
08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:07.271 [ 0]:0x2 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f93f9d6d6b7a403d994b020db7847c9a 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f93f9d6d6b7a403d994b020db7847c9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.271 08:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:07.271 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:16:07.530 [2024-11-20 08:13:21.462873] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:16:07.530 request: 00:16:07.530 { 00:16:07.530 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:07.530 "nsid": 2, 00:16:07.530 "host": "nqn.2016-06.io.spdk:host1", 00:16:07.530 "method": "nvmf_ns_remove_host", 00:16:07.530 "req_id": 1 00:16:07.530 } 00:16:07.530 Got JSON-RPC error response 00:16:07.530 response: 00:16:07.530 { 00:16:07.530 "code": -32602, 00:16:07.530 "message": "Invalid parameters" 00:16:07.530 } 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:16:07.530 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:07.788 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:16:07.788 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:07.789 08:13:21 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:16:07.789 [ 0]:0x2 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=f93f9d6d6b7a403d994b020db7847c9a 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ f93f9d6d6b7a403d994b020db7847c9a != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:07.789 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=1654953 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 1654953 /var/tmp/host.sock 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 1654953 ']' 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:16:07.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:07.789 08:13:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:08.047 [2024-11-20 08:13:21.832818] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:16:08.047 [2024-11-20 08:13:21.832864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1654953 ] 00:16:08.047 [2024-11-20 08:13:21.908859] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:08.047 [2024-11-20 08:13:21.949114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:08.305 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:08.305 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:16:08.305 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:08.563 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:08.563 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 9effce2a-ffc6-4dee-8f89-14e8af207829 00:16:08.563 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:16:08.563 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9EFFCE2AFFC64DEE8F8914E8AF207829 -i 00:16:08.821 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 325d666b-7a13-4da3-9779-46a5da2a0038 00:16:08.821 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:16:08.821 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 325D666B7A134DA3977946A5DA2A0038 -i 00:16:09.079 08:13:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:16:09.336 08:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:16:09.594 08:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:09.594 08:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:16:09.851 nvme0n1 00:16:09.851 08:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:09.851 08:13:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:16:10.416 nvme1n2 00:16:10.416 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:16:10.416 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:16:10.416 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:10.416 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:16:10.416 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:16:10.416 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:16:10.416 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:16:10.416 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:16:10.416 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:16:10.674 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 9effce2a-ffc6-4dee-8f89-14e8af207829 == \9\e\f\f\c\e\2\a\-\f\f\c\6\-\4\d\e\e\-\8\f\8\9\-\1\4\e\8\a\f\2\0\7\8\2\9 ]] 00:16:10.674 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:16:10.674 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:16:10.674 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:16:10.932 08:13:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 325d666b-7a13-4da3-9779-46a5da2a0038 == \3\2\5\d\6\6\6\b\-\7\a\1\3\-\4\d\a\3\-\9\7\7\9\-\4\6\a\5\d\a\2\a\0\0\3\8 ]] 00:16:10.932 08:13:24 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:16:11.190 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 9effce2a-ffc6-4dee-8f89-14e8af207829 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9EFFCE2AFFC64DEE8F8914E8AF207829 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9EFFCE2AFFC64DEE8F8914E8AF207829 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 9EFFCE2AFFC64DEE8F8914E8AF207829 00:16:11.447 [2024-11-20 08:13:25.405872] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:16:11.447 [2024-11-20 08:13:25.405902] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:16:11.447 [2024-11-20 08:13:25.405910] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:16:11.447 request: 00:16:11.447 { 00:16:11.447 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:16:11.447 "namespace": { 00:16:11.447 "bdev_name": "invalid", 00:16:11.447 "nsid": 1, 00:16:11.447 "nguid": "9EFFCE2AFFC64DEE8F8914E8AF207829", 00:16:11.447 "no_auto_visible": false 00:16:11.447 }, 00:16:11.447 "method": "nvmf_subsystem_add_ns", 00:16:11.447 "req_id": 1 00:16:11.447 } 00:16:11.447 Got JSON-RPC error response 00:16:11.447 response: 00:16:11.447 { 00:16:11.447 "code": -32602, 00:16:11.447 "message": "Invalid parameters" 00:16:11.447 } 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 9effce2a-ffc6-4dee-8f89-14e8af207829 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@544 -- # tr -d - 00:16:11.447 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 9EFFCE2AFFC64DEE8F8914E8AF207829 -i 00:16:11.705 08:13:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 1654953 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1654953 ']' 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1654953 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1654953 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1654953' 00:16:14.234 killing process with pid 1654953 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1654953 00:16:14.234 08:13:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1654953 00:16:14.234 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@99 -- # sync 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # set +e 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:14.493 rmmod nvme_tcp 00:16:14.493 rmmod 
nvme_fabrics 00:16:14.493 rmmod nvme_keyring 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # set -e 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # return 0 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # '[' -n 1652948 ']' 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@337 -- # killprocess 1652948 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 1652948 ']' 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 1652948 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:14.493 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1652948 00:16:14.752 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:14.752 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:14.752 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1652948' 00:16:14.752 killing process with pid 1652948 00:16:14.752 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 1652948 00:16:14.752 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 1652948 00:16:14.752 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:14.752 
08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # nvmf_fini 00:16:14.752 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@254 -- # local dev 00:16:14.752 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@257 -- # remove_target_ns 00:16:14.752 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:14.752 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:14.752 08:13:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@258 -- # delete_main_bridge 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@121 -- # return 0 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:16:17.287 08:13:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # _dev=0 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # dev_map=() 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@274 -- # iptr 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # iptables-save 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@548 -- # iptables-restore 00:16:17.287 00:16:17.287 real 0m26.724s 00:16:17.287 user 0m31.927s 00:16:17.287 sys 0m7.182s 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:16:17.287 
************************************ 00:16:17.287 END TEST nvmf_ns_masking 00:16:17.287 ************************************ 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:17.287 ************************************ 00:16:17.287 START TEST nvmf_nvme_cli 00:16:17.287 ************************************ 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:16:17.287 * Looking for test storage... 
00:16:17.287 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:16:17.287 08:13:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:16:17.287 08:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:16:17.287 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:17.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.288 --rc 
genhtml_branch_coverage=1 00:16:17.288 --rc genhtml_function_coverage=1 00:16:17.288 --rc genhtml_legend=1 00:16:17.288 --rc geninfo_all_blocks=1 00:16:17.288 --rc geninfo_unexecuted_blocks=1 00:16:17.288 00:16:17.288 ' 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:17.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.288 --rc genhtml_branch_coverage=1 00:16:17.288 --rc genhtml_function_coverage=1 00:16:17.288 --rc genhtml_legend=1 00:16:17.288 --rc geninfo_all_blocks=1 00:16:17.288 --rc geninfo_unexecuted_blocks=1 00:16:17.288 00:16:17.288 ' 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:17.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.288 --rc genhtml_branch_coverage=1 00:16:17.288 --rc genhtml_function_coverage=1 00:16:17.288 --rc genhtml_legend=1 00:16:17.288 --rc geninfo_all_blocks=1 00:16:17.288 --rc geninfo_unexecuted_blocks=1 00:16:17.288 00:16:17.288 ' 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:17.288 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:17.288 --rc genhtml_branch_coverage=1 00:16:17.288 --rc genhtml_function_coverage=1 00:16:17.288 --rc genhtml_legend=1 00:16:17.288 --rc geninfo_all_blocks=1 00:16:17.288 --rc geninfo_unexecuted_blocks=1 00:16:17.288 00:16:17.288 ' 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:17.288 08:13:31 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 
00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@50 -- # : 0 
00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:17.288 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # prepare_net_devs 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # local -g 
is_hw=no 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # remove_target_ns 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # xtrace_disable 00:16:17.288 08:13:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # pci_devs=() 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # local -a pci_devs 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # pci_net_devs=() 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # pci_drivers=() 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # local -A pci_drivers 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # net_devs=() 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # local -ga net_devs 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@136 -- # e810=() 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@136 -- # local -ga e810 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # x722=() 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # local -ga x722 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # mlx=() 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # local -ga mlx 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:23.854 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:23.854 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:23.854 Found net devices under 0000:86:00.0: cvl_0_0 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:23.854 Found net devices under 0000:86:00.1: cvl_0_1 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # is_hw=yes 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:16:23.854 08:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@247 -- # create_target_ns 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:16:23.854 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@27 -- # local -gA dev_map 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@28 -- # local -g _dev 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:16:23.855 08:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # ips=() 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 
netns nvmf_ns_spdk 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772161 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:16:23.855 10.0.0.1 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:23.855 08:13:36 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772162 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:16:23.855 10.0.0.2 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:16:23.855 
08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:16:23.855 08:13:36 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@38 -- # ping_ips 1 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:16:23.855 08:13:37 
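Aside: the trace above shows `val_to_ip` turning the pool value 167772161 into 10.0.0.1 (and 167772162 into 10.0.0.2) before `ip addr add`. A minimal sketch of that conversion — the octet-shift internals are an assumption about the helper; only the input/output pairs come from the log:

```shell
# Sketch of the val_to_ip helper seen in nvmf/setup.sh: split a 32-bit
# integer into four dotted-quad octets. Internals are assumed; the
# observed pairs (167772161 -> 10.0.0.1, 167772162 -> 10.0.0.2) are
# taken from the trace above.
val_to_ip() {
  val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Incrementing the integer pool by one per device is what yields consecutive addresses for each initiator/target pair.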
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 
in_ns=NVMF_TARGET_NS_CMD count=1 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:16:23.855 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:23.855 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:16:23.855 00:16:23.855 --- 10.0.0.1 ping statistics --- 00:16:23.855 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.855 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:23.855 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=target0 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:16:23.856 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:16:23.856 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:16:23.856 00:16:23.856 --- 10.0.0.2 ping statistics --- 00:16:23.856 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:23.856 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair++ )) 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # return 0 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=initiator0 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:16:23.856 08:13:37 
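Aside: both ping checks above resolve the address to ping by reading it back from the device's `ifalias` file (where `set_ip` stored it with `tee`) rather than parsing `ip addr` output. A sketch of that store/read-back pattern, with a temp file standing in for `/sys/class/net/<dev>/ifalias` (a hypothetical substitution, for illustration only):

```shell
# Sketch of the ifalias round-trip in the trace: set_ip writes the
# address into the interface's ifalias, get_ip_address cats it back.
# A mktemp file stands in for /sys/class/net/<dev>/ifalias here, since
# the real path needs a live interface and root privileges.
ifalias_file=$(mktemp)

# set_ip side: record the address alongside the interface
echo 10.0.0.1 | tee "$ifalias_file" >/dev/null

# get_ip_address side: read it back later
ip=$(cat "$ifalias_file")
echo "$ip"   # 10.0.0.1

rm -f "$ifalias_file"
```

Storing the address in ifalias gives later helpers a single stable place to look it up, independent of how `ip addr` formats its output.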
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=initiator1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # return 1 00:16:23.856 08:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev= 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@160 -- # return 0 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev target0 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=target0 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:16:23.856 08:13:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # get_net_dev target1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # local dev=target1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # return 1 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@159 -- # dev= 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/setup.sh@160 -- # return 0 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:16:23.856 ' 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # nvmfpid=1659695 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # waitforlisten 1659695 00:16:23.856 08:13:37 
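Aside: the common.sh lines above first set `NVMF_TRANSPORT_OPTS='-t tcp'` and then append `-o` because the transport is TCP. A sketch of that accumulation — the variable names and resulting value are taken from the trace; the conditional structure is an assumption:

```shell
# Sketch of how nvmf/common.sh assembles transport options in the trace:
# start from the transport flag, then append '-o' for TCP transports.
# (Structure assumed; the final value '-t tcp -o' matches the log.)
TEST_TRANSPORT=tcp
NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"
case $TEST_TRANSPORT in
  tcp) NVMF_TRANSPORT_OPTS="$NVMF_TRANSPORT_OPTS -o" ;;
esac

echo "$NVMF_TRANSPORT_OPTS"   # -t tcp -o
```

The combined string is later passed to `rpc_cmd nvmf_create_transport`, as the `-t tcp -o -u 8192` invocation further down shows.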
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 1659695 ']' 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:23.856 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:23.856 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:23.857 [2024-11-20 08:13:37.283623] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:16:23.857 [2024-11-20 08:13:37.283670] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:23.857 [2024-11-20 08:13:37.360642] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:23.857 [2024-11-20 08:13:37.401116] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:23.857 [2024-11-20 08:13:37.401157] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:23.857 [2024-11-20 08:13:37.401165] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:23.857 [2024-11-20 08:13:37.401172] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:16:23.857 [2024-11-20 08:13:37.401177] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:23.857 [2024-11-20 08:13:37.402735] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:23.857 [2024-11-20 08:13:37.402845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:23.857 [2024-11-20 08:13:37.402955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.857 [2024-11-20 08:13:37.402956] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:23.857 [2024-11-20 08:13:37.551548] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd 
bdev_malloc_create 64 512 -b Malloc0 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:23.857 Malloc0 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:23.857 Malloc1 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:23.857 [2024-11-20 08:13:37.645561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:16:23.857 00:16:23.857 Discovery Log Number of Records 2, Generation counter 2 00:16:23.857 =====Discovery Log Entry 0====== 00:16:23.857 trtype: tcp 
00:16:23.857 adrfam: ipv4 00:16:23.857 subtype: current discovery subsystem 00:16:23.857 treq: not required 00:16:23.857 portid: 0 00:16:23.857 trsvcid: 4420 00:16:23.857 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:16:23.857 traddr: 10.0.0.2 00:16:23.857 eflags: explicit discovery connections, duplicate discovery information 00:16:23.857 sectype: none 00:16:23.857 =====Discovery Log Entry 1====== 00:16:23.857 trtype: tcp 00:16:23.857 adrfam: ipv4 00:16:23.857 subtype: nvme subsystem 00:16:23.857 treq: not required 00:16:23.857 portid: 0 00:16:23.857 trsvcid: 4420 00:16:23.857 subnqn: nqn.2016-06.io.spdk:cnode1 00:16:23.857 traddr: 10.0.0.2 00:16:23.857 eflags: none 00:16:23.857 sectype: none 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:16:23.857 08:13:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:25.228 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:16:25.228 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:16:25.228 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:25.228 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:16:25.228 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:16:25.228 08:13:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 
00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:16:27.124 /dev/nvme0n2 ]] 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- 
# read -r dev _ 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:16:27.124 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:27.382 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # return 0 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # nvmfcleanup 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@99 -- # sync 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # set +e 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # for i in {1..20} 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:16:27.382 rmmod nvme_tcp 00:16:27.382 rmmod nvme_fabrics 00:16:27.382 rmmod nvme_keyring 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # set -e 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 
-- # return 0 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # '[' -n 1659695 ']' 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@337 -- # killprocess 1659695 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 1659695 ']' 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 1659695 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1659695 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1659695' 00:16:27.382 killing process with pid 1659695 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 1659695 00:16:27.382 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 1659695 00:16:27.640 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:16:27.640 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # nvmf_fini 00:16:27.640 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@254 -- # local dev 00:16:27.640 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@257 -- # remove_target_ns 00:16:27.640 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@313 -- 
# xtrace_disable_per_cmd _remove_target_ns 00:16:27.640 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:16:27.640 08:13:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:16:30.176 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@258 -- # delete_main_bridge 00:16:30.176 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:16:30.176 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@121 -- # return 0 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 
00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # _dev=0 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # dev_map=() 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@274 -- # iptr 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # iptables-save 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@548 -- # iptables-restore 00:16:30.177 00:16:30.177 real 0m12.750s 00:16:30.177 user 0m18.386s 00:16:30.177 sys 0m5.124s 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:16:30.177 ************************************ 00:16:30.177 END TEST nvmf_nvme_cli 00:16:30.177 ************************************ 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:30.177 ************************************ 00:16:30.177 START TEST nvmf_vfio_user 00:16:30.177 ************************************ 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:16:30.177 * Looking for test storage... 00:16:30.177 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:16:30.177 08:13:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:16:30.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.177 --rc genhtml_branch_coverage=1 00:16:30.177 --rc genhtml_function_coverage=1 00:16:30.177 --rc genhtml_legend=1 00:16:30.177 --rc geninfo_all_blocks=1 00:16:30.177 --rc geninfo_unexecuted_blocks=1 00:16:30.177 00:16:30.177 ' 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:16:30.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.177 --rc genhtml_branch_coverage=1 00:16:30.177 --rc genhtml_function_coverage=1 00:16:30.177 --rc genhtml_legend=1 00:16:30.177 --rc geninfo_all_blocks=1 00:16:30.177 --rc geninfo_unexecuted_blocks=1 00:16:30.177 00:16:30.177 ' 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:16:30.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.177 --rc genhtml_branch_coverage=1 00:16:30.177 --rc genhtml_function_coverage=1 00:16:30.177 --rc genhtml_legend=1 00:16:30.177 --rc geninfo_all_blocks=1 00:16:30.177 --rc geninfo_unexecuted_blocks=1 00:16:30.177 00:16:30.177 ' 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:16:30.177 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:30.177 --rc genhtml_branch_coverage=1 00:16:30.177 --rc 
genhtml_function_coverage=1 00:16:30.177 --rc genhtml_legend=1 00:16:30.177 --rc geninfo_all_blocks=1 00:16:30.177 --rc geninfo_unexecuted_blocks=1 00:16:30.177 00:16:30.177 ' 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
nvmf/common.sh@19 -- # NET_TYPE=phy 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:16:30.177 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@50 -- # : 0 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:16:30.178 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@54 -- # have_pci_nics=0 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1660792 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1660792' 00:16:30.178 Process pid: 1660792 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 
'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1660792 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1660792 ']' 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:30.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:30.178 08:13:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:16:30.178 [2024-11-20 08:13:43.981929] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:16:30.178 [2024-11-20 08:13:43.981978] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:30.178 [2024-11-20 08:13:44.057853] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:30.178 [2024-11-20 08:13:44.098854] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:16:30.178 [2024-11-20 08:13:44.098893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:30.178 [2024-11-20 08:13:44.098900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:30.178 [2024-11-20 08:13:44.098907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:30.178 [2024-11-20 08:13:44.098915] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:30.178 [2024-11-20 08:13:44.100324] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:16:30.178 [2024-11-20 08:13:44.100435] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:16:30.178 [2024-11-20 08:13:44.100540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:16:30.178 [2024-11-20 08:13:44.100541] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:16:30.436 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:30.436 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:16:30.436 08:13:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:16:31.368 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:16:31.625 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:16:31.625 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:16:31.625 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:31.625 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user1/1 00:16:31.625 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:16:31.625 Malloc1 00:16:31.882 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:16:31.882 08:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:16:32.139 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:16:32.397 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:32.397 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:16:32.397 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:16:32.654 Malloc2 00:16:32.654 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:16:32.654 08:13:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:16:32.911 08:13:46 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:16:33.170 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:16:33.170 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:16:33.170 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:33.170 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:33.170 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:16:33.170 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:33.171 [2024-11-20 08:13:47.087459] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:16:33.171 [2024-11-20 08:13:47.087493] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1661443 ] 00:16:33.171 [2024-11-20 08:13:47.127702] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:16:33.171 [2024-11-20 08:13:47.136511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:33.171 [2024-11-20 08:13:47.136533] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f1cd801d000 00:16:33.171 [2024-11-20 08:13:47.137507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.171 [2024-11-20 08:13:47.138505] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.171 [2024-11-20 08:13:47.139509] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.171 [2024-11-20 08:13:47.140514] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:33.171 [2024-11-20 08:13:47.141514] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:33.171 [2024-11-20 08:13:47.142521] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.171 [2024-11-20 08:13:47.143531] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:33.171 
[2024-11-20 08:13:47.144536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:33.171 [2024-11-20 08:13:47.145545] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:33.171 [2024-11-20 08:13:47.145554] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f1cd8012000 00:16:33.171 [2024-11-20 08:13:47.146470] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:33.171 [2024-11-20 08:13:47.155915] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:16:33.171 [2024-11-20 08:13:47.155943] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:16:33.171 [2024-11-20 08:13:47.160640] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:33.171 [2024-11-20 08:13:47.160679] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:33.171 [2024-11-20 08:13:47.160752] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:16:33.171 [2024-11-20 08:13:47.160767] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:16:33.171 [2024-11-20 08:13:47.160773] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:16:33.171 [2024-11-20 08:13:47.161638] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:16:33.171 [2024-11-20 08:13:47.161649] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:16:33.171 [2024-11-20 08:13:47.161656] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:16:33.171 [2024-11-20 08:13:47.162646] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:16:33.171 [2024-11-20 08:13:47.162654] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:16:33.171 [2024-11-20 08:13:47.162660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:33.171 [2024-11-20 08:13:47.163651] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:16:33.171 [2024-11-20 08:13:47.163658] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:33.171 [2024-11-20 08:13:47.164656] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:16:33.171 [2024-11-20 08:13:47.164664] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:33.171 [2024-11-20 08:13:47.164669] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:33.171 [2024-11-20 08:13:47.164675] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:33.171 [2024-11-20 08:13:47.164782] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:16:33.171 [2024-11-20 08:13:47.164787] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:33.171 [2024-11-20 08:13:47.164791] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:16:33.171 [2024-11-20 08:13:47.165669] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:16:33.171 [2024-11-20 08:13:47.166670] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:16:33.171 [2024-11-20 08:13:47.167677] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:33.171 [2024-11-20 08:13:47.168678] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:33.171 [2024-11-20 08:13:47.168752] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:33.171 [2024-11-20 08:13:47.169692] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:16:33.171 [2024-11-20 08:13:47.169700] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:33.171 [2024-11-20 08:13:47.169704] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:33.171 [2024-11-20 08:13:47.169720] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:16:33.171 [2024-11-20 08:13:47.169727] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:33.171 [2024-11-20 08:13:47.169745] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:33.171 [2024-11-20 08:13:47.169750] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:33.171 [2024-11-20 08:13:47.169754] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.171 [2024-11-20 08:13:47.169767] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:33.171 [2024-11-20 08:13:47.169813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:33.171 [2024-11-20 08:13:47.169822] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:16:33.171 [2024-11-20 08:13:47.169827] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:16:33.171 [2024-11-20 08:13:47.169831] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:16:33.171 [2024-11-20 08:13:47.169835] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:33.171 [2024-11-20 08:13:47.169841] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:16:33.171 [2024-11-20 08:13:47.169846] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:16:33.171 [2024-11-20 08:13:47.169850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:16:33.171 [2024-11-20 08:13:47.169860] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:33.171 [2024-11-20 08:13:47.169869] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:33.171 [2024-11-20 08:13:47.169881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:33.171 [2024-11-20 08:13:47.169891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.172 [2024-11-20 08:13:47.169899] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.172 [2024-11-20 08:13:47.169906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.172 [2024-11-20 08:13:47.169913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:33.172 [2024-11-20 08:13:47.169918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.169923] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.169932] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:33.172 [2024-11-20 08:13:47.169941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:33.172 [2024-11-20 08:13:47.169948] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:16:33.172 [2024-11-20 08:13:47.169953] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.169959] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.169966] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.169975] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:33.172 [2024-11-20 08:13:47.169987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:33.172 [2024-11-20 08:13:47.170037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.170044] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:33.172 
[2024-11-20 08:13:47.170051] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:33.172 [2024-11-20 08:13:47.170055] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:33.172 [2024-11-20 08:13:47.170058] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.172 [2024-11-20 08:13:47.170064] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:33.172 [2024-11-20 08:13:47.170080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:33.172 [2024-11-20 08:13:47.170089] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:16:33.172 [2024-11-20 08:13:47.170100] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.170107] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.170113] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:33.172 [2024-11-20 08:13:47.170117] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:33.172 [2024-11-20 08:13:47.170120] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.172 [2024-11-20 08:13:47.170125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:33.172 [2024-11-20 08:13:47.170143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:33.172 [2024-11-20 08:13:47.170155] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.170162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.170168] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:33.172 [2024-11-20 08:13:47.170172] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:33.172 [2024-11-20 08:13:47.170175] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.172 [2024-11-20 08:13:47.170181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:33.172 [2024-11-20 08:13:47.170195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:33.172 [2024-11-20 08:13:47.170207] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.170215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.170222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.170227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.170232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.170236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.170241] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:33.172 [2024-11-20 08:13:47.170245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:16:33.172 [2024-11-20 08:13:47.170249] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:16:33.172 [2024-11-20 08:13:47.170265] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:33.172 [2024-11-20 08:13:47.170273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:33.172 [2024-11-20 08:13:47.170284] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:33.172 [2024-11-20 08:13:47.170294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:33.172 [2024-11-20 08:13:47.170303] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:33.172 [2024-11-20 08:13:47.170313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:33.172 [2024-11-20 
08:13:47.170323] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:33.172 [2024-11-20 08:13:47.170335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:33.172 [2024-11-20 08:13:47.170347] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:33.172 [2024-11-20 08:13:47.170351] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:33.172 [2024-11-20 08:13:47.170354] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:33.172 [2024-11-20 08:13:47.170357] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:33.172 [2024-11-20 08:13:47.170360] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:33.172 [2024-11-20 08:13:47.170366] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:33.172 [2024-11-20 08:13:47.170372] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:33.172 [2024-11-20 08:13:47.170376] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:33.172 [2024-11-20 08:13:47.170379] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.173 [2024-11-20 08:13:47.170385] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:33.173 [2024-11-20 08:13:47.170392] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:33.173 [2024-11-20 08:13:47.170397] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:33.173 [2024-11-20 08:13:47.170400] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.173 [2024-11-20 08:13:47.170405] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:33.173 [2024-11-20 08:13:47.170412] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:33.173 [2024-11-20 08:13:47.170415] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:33.173 [2024-11-20 08:13:47.170418] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:33.173 [2024-11-20 08:13:47.170424] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:33.173 [2024-11-20 08:13:47.170430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:33.173 [2024-11-20 08:13:47.170441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:33.173 [2024-11-20 08:13:47.170451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:33.173 [2024-11-20 08:13:47.170457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:33.173 ===================================================== 00:16:33.173 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:33.173 ===================================================== 00:16:33.173 Controller Capabilities/Features 00:16:33.173 
================================ 00:16:33.173 Vendor ID: 4e58 00:16:33.173 Subsystem Vendor ID: 4e58 00:16:33.173 Serial Number: SPDK1 00:16:33.173 Model Number: SPDK bdev Controller 00:16:33.173 Firmware Version: 25.01 00:16:33.173 Recommended Arb Burst: 6 00:16:33.173 IEEE OUI Identifier: 8d 6b 50 00:16:33.173 Multi-path I/O 00:16:33.173 May have multiple subsystem ports: Yes 00:16:33.173 May have multiple controllers: Yes 00:16:33.173 Associated with SR-IOV VF: No 00:16:33.173 Max Data Transfer Size: 131072 00:16:33.173 Max Number of Namespaces: 32 00:16:33.173 Max Number of I/O Queues: 127 00:16:33.173 NVMe Specification Version (VS): 1.3 00:16:33.173 NVMe Specification Version (Identify): 1.3 00:16:33.173 Maximum Queue Entries: 256 00:16:33.173 Contiguous Queues Required: Yes 00:16:33.173 Arbitration Mechanisms Supported 00:16:33.173 Weighted Round Robin: Not Supported 00:16:33.173 Vendor Specific: Not Supported 00:16:33.173 Reset Timeout: 15000 ms 00:16:33.173 Doorbell Stride: 4 bytes 00:16:33.173 NVM Subsystem Reset: Not Supported 00:16:33.173 Command Sets Supported 00:16:33.173 NVM Command Set: Supported 00:16:33.173 Boot Partition: Not Supported 00:16:33.173 Memory Page Size Minimum: 4096 bytes 00:16:33.173 Memory Page Size Maximum: 4096 bytes 00:16:33.173 Persistent Memory Region: Not Supported 00:16:33.173 Optional Asynchronous Events Supported 00:16:33.173 Namespace Attribute Notices: Supported 00:16:33.173 Firmware Activation Notices: Not Supported 00:16:33.173 ANA Change Notices: Not Supported 00:16:33.173 PLE Aggregate Log Change Notices: Not Supported 00:16:33.173 LBA Status Info Alert Notices: Not Supported 00:16:33.173 EGE Aggregate Log Change Notices: Not Supported 00:16:33.173 Normal NVM Subsystem Shutdown event: Not Supported 00:16:33.173 Zone Descriptor Change Notices: Not Supported 00:16:33.173 Discovery Log Change Notices: Not Supported 00:16:33.173 Controller Attributes 00:16:33.173 128-bit Host Identifier: Supported 00:16:33.173 
Non-Operational Permissive Mode: Not Supported 00:16:33.173 NVM Sets: Not Supported 00:16:33.173 Read Recovery Levels: Not Supported 00:16:33.173 Endurance Groups: Not Supported 00:16:33.173 Predictable Latency Mode: Not Supported 00:16:33.173 Traffic Based Keep ALive: Not Supported 00:16:33.173 Namespace Granularity: Not Supported 00:16:33.173 SQ Associations: Not Supported 00:16:33.173 UUID List: Not Supported 00:16:33.173 Multi-Domain Subsystem: Not Supported 00:16:33.173 Fixed Capacity Management: Not Supported 00:16:33.173 Variable Capacity Management: Not Supported 00:16:33.173 Delete Endurance Group: Not Supported 00:16:33.173 Delete NVM Set: Not Supported 00:16:33.173 Extended LBA Formats Supported: Not Supported 00:16:33.173 Flexible Data Placement Supported: Not Supported 00:16:33.173 00:16:33.173 Controller Memory Buffer Support 00:16:33.173 ================================ 00:16:33.173 Supported: No 00:16:33.173 00:16:33.173 Persistent Memory Region Support 00:16:33.173 ================================ 00:16:33.173 Supported: No 00:16:33.173 00:16:33.173 Admin Command Set Attributes 00:16:33.173 ============================ 00:16:33.173 Security Send/Receive: Not Supported 00:16:33.173 Format NVM: Not Supported 00:16:33.173 Firmware Activate/Download: Not Supported 00:16:33.173 Namespace Management: Not Supported 00:16:33.173 Device Self-Test: Not Supported 00:16:33.173 Directives: Not Supported 00:16:33.173 NVMe-MI: Not Supported 00:16:33.173 Virtualization Management: Not Supported 00:16:33.173 Doorbell Buffer Config: Not Supported 00:16:33.173 Get LBA Status Capability: Not Supported 00:16:33.173 Command & Feature Lockdown Capability: Not Supported 00:16:33.173 Abort Command Limit: 4 00:16:33.173 Async Event Request Limit: 4 00:16:33.173 Number of Firmware Slots: N/A 00:16:33.173 Firmware Slot 1 Read-Only: N/A 00:16:33.173 Firmware Activation Without Reset: N/A 00:16:33.173 Multiple Update Detection Support: N/A 00:16:33.173 Firmware Update 
Granularity: No Information Provided 00:16:33.173 Per-Namespace SMART Log: No 00:16:33.173 Asymmetric Namespace Access Log Page: Not Supported 00:16:33.173 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:16:33.173 Command Effects Log Page: Supported 00:16:33.173 Get Log Page Extended Data: Supported 00:16:33.173 Telemetry Log Pages: Not Supported 00:16:33.173 Persistent Event Log Pages: Not Supported 00:16:33.173 Supported Log Pages Log Page: May Support 00:16:33.173 Commands Supported & Effects Log Page: Not Supported 00:16:33.173 Feature Identifiers & Effects Log Page:May Support 00:16:33.173 NVMe-MI Commands & Effects Log Page: May Support 00:16:33.173 Data Area 4 for Telemetry Log: Not Supported 00:16:33.173 Error Log Page Entries Supported: 128 00:16:33.173 Keep Alive: Supported 00:16:33.173 Keep Alive Granularity: 10000 ms 00:16:33.174 00:16:33.174 NVM Command Set Attributes 00:16:33.174 ========================== 00:16:33.174 Submission Queue Entry Size 00:16:33.174 Max: 64 00:16:33.174 Min: 64 00:16:33.174 Completion Queue Entry Size 00:16:33.174 Max: 16 00:16:33.174 Min: 16 00:16:33.174 Number of Namespaces: 32 00:16:33.174 Compare Command: Supported 00:16:33.174 Write Uncorrectable Command: Not Supported 00:16:33.174 Dataset Management Command: Supported 00:16:33.174 Write Zeroes Command: Supported 00:16:33.174 Set Features Save Field: Not Supported 00:16:33.174 Reservations: Not Supported 00:16:33.174 Timestamp: Not Supported 00:16:33.174 Copy: Supported 00:16:33.174 Volatile Write Cache: Present 00:16:33.174 Atomic Write Unit (Normal): 1 00:16:33.174 Atomic Write Unit (PFail): 1 00:16:33.174 Atomic Compare & Write Unit: 1 00:16:33.174 Fused Compare & Write: Supported 00:16:33.174 Scatter-Gather List 00:16:33.174 SGL Command Set: Supported (Dword aligned) 00:16:33.174 SGL Keyed: Not Supported 00:16:33.174 SGL Bit Bucket Descriptor: Not Supported 00:16:33.174 SGL Metadata Pointer: Not Supported 00:16:33.174 Oversized SGL: Not Supported 00:16:33.174 SGL 
Metadata Address: Not Supported 00:16:33.174 SGL Offset: Not Supported 00:16:33.174 Transport SGL Data Block: Not Supported 00:16:33.174 Replay Protected Memory Block: Not Supported 00:16:33.174 00:16:33.174 Firmware Slot Information 00:16:33.174 ========================= 00:16:33.174 Active slot: 1 00:16:33.174 Slot 1 Firmware Revision: 25.01 00:16:33.174 00:16:33.174 00:16:33.174 Commands Supported and Effects 00:16:33.174 ============================== 00:16:33.174 Admin Commands 00:16:33.174 -------------- 00:16:33.174 Get Log Page (02h): Supported 00:16:33.174 Identify (06h): Supported 00:16:33.174 Abort (08h): Supported 00:16:33.174 Set Features (09h): Supported 00:16:33.174 Get Features (0Ah): Supported 00:16:33.174 Asynchronous Event Request (0Ch): Supported 00:16:33.174 Keep Alive (18h): Supported 00:16:33.174 I/O Commands 00:16:33.174 ------------ 00:16:33.174 Flush (00h): Supported LBA-Change 00:16:33.174 Write (01h): Supported LBA-Change 00:16:33.174 Read (02h): Supported 00:16:33.174 Compare (05h): Supported 00:16:33.174 Write Zeroes (08h): Supported LBA-Change 00:16:33.174 Dataset Management (09h): Supported LBA-Change 00:16:33.174 Copy (19h): Supported LBA-Change 00:16:33.174 00:16:33.174 Error Log 00:16:33.174 ========= 00:16:33.174 00:16:33.174 Arbitration 00:16:33.174 =========== 00:16:33.174 Arbitration Burst: 1 00:16:33.174 00:16:33.174 Power Management 00:16:33.174 ================ 00:16:33.174 Number of Power States: 1 00:16:33.174 Current Power State: Power State #0 00:16:33.174 Power State #0: 00:16:33.174 Max Power: 0.00 W 00:16:33.174 Non-Operational State: Operational 00:16:33.174 Entry Latency: Not Reported 00:16:33.174 Exit Latency: Not Reported 00:16:33.174 Relative Read Throughput: 0 00:16:33.174 Relative Read Latency: 0 00:16:33.174 Relative Write Throughput: 0 00:16:33.174 Relative Write Latency: 0 00:16:33.174 Idle Power: Not Reported 00:16:33.174 Active Power: Not Reported 00:16:33.174 Non-Operational Permissive Mode: Not Supported 00:16:33.174 00:16:33.174 Health Information 00:16:33.174 ================== 00:16:33.174 Critical Warnings: 00:16:33.174 Available Spare Space: OK 00:16:33.174 Temperature: OK 00:16:33.174 Device Reliability: OK 00:16:33.174 Read Only: No 00:16:33.174 Volatile Memory Backup: OK 00:16:33.174 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:33.174 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:33.174 Available Spare: 0% 00:16:33.174 [2024-11-20 08:13:47.170537] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:33.174 [2024-11-20 08:13:47.170545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:33.174 [2024-11-20 08:13:47.170568] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:16:33.174 [2024-11-20 08:13:47.170576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.174 [2024-11-20 08:13:47.170582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.174 [2024-11-20 08:13:47.170588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.174 [2024-11-20 08:13:47.170593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:33.174 [2024-11-20 08:13:47.174210] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:16:33.174 [2024-11-20 08:13:47.174221] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:16:33.174 
[2024-11-20 08:13:47.174720] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:33.174 [2024-11-20 08:13:47.174771] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:16:33.174 [2024-11-20 08:13:47.174777] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:16:33.174 [2024-11-20 08:13:47.175726] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:16:33.174 [2024-11-20 08:13:47.175735] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:16:33.174 [2024-11-20 08:13:47.175785] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:16:33.174 [2024-11-20 08:13:47.176758] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:33.432 Available Spare Threshold: 0% 00:16:33.432 Life Percentage Used: 0% 00:16:33.432 Data Units Read: 0 00:16:33.432 Data Units Written: 0 00:16:33.432 Host Read Commands: 0 00:16:33.432 Host Write Commands: 0 00:16:33.432 Controller Busy Time: 0 minutes 00:16:33.432 Power Cycles: 0 00:16:33.432 Power On Hours: 0 hours 00:16:33.432 Unsafe Shutdowns: 0 00:16:33.432 Unrecoverable Media Errors: 0 00:16:33.432 Lifetime Error Log Entries: 0 00:16:33.432 Warning Temperature Time: 0 minutes 00:16:33.432 Critical Temperature Time: 0 minutes 00:16:33.432 00:16:33.432 Number of Queues 00:16:33.432 ================ 00:16:33.432 Number of I/O Submission Queues: 127 00:16:33.432 Number of I/O Completion Queues: 127 00:16:33.432 00:16:33.432 Active Namespaces 00:16:33.432 ================= 00:16:33.432 Namespace ID:1 00:16:33.432 Error Recovery Timeout: Unlimited 
00:16:33.433 Command Set Identifier: NVM (00h) 00:16:33.433 Deallocate: Supported 00:16:33.433 Deallocated/Unwritten Error: Not Supported 00:16:33.433 Deallocated Read Value: Unknown 00:16:33.433 Deallocate in Write Zeroes: Not Supported 00:16:33.433 Deallocated Guard Field: 0xFFFF 00:16:33.433 Flush: Supported 00:16:33.433 Reservation: Supported 00:16:33.433 Namespace Sharing Capabilities: Multiple Controllers 00:16:33.433 Size (in LBAs): 131072 (0GiB) 00:16:33.433 Capacity (in LBAs): 131072 (0GiB) 00:16:33.433 Utilization (in LBAs): 131072 (0GiB) 00:16:33.433 NGUID: 5E14E16D9FBB450A8559C3B5210D2626 00:16:33.433 UUID: 5e14e16d-9fbb-450a-8559-c3b5210d2626 00:16:33.433 Thin Provisioning: Not Supported 00:16:33.433 Per-NS Atomic Units: Yes 00:16:33.433 Atomic Boundary Size (Normal): 0 00:16:33.433 Atomic Boundary Size (PFail): 0 00:16:33.433 Atomic Boundary Offset: 0 00:16:33.433 Maximum Single Source Range Length: 65535 00:16:33.433 Maximum Copy Length: 65535 00:16:33.433 Maximum Source Range Count: 1 00:16:33.433 NGUID/EUI64 Never Reused: No 00:16:33.433 Namespace Write Protected: No 00:16:33.433 Number of LBA Formats: 1 00:16:33.433 Current LBA Format: LBA Format #00 00:16:33.433 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:33.433 00:16:33.433 08:13:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:33.433 [2024-11-20 08:13:47.400992] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:38.691 Initializing NVMe Controllers 00:16:38.691 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:38.691 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 
00:16:38.691 Initialization complete. Launching workers. 00:16:38.691 ======================================================== 00:16:38.691 Latency(us) 00:16:38.691 Device Information : IOPS MiB/s Average min max 00:16:38.691 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39878.96 155.78 3209.54 930.82 8632.27 00:16:38.691 ======================================================== 00:16:38.691 Total : 39878.96 155.78 3209.54 930.82 8632.27 00:16:38.691 00:16:38.691 [2024-11-20 08:13:52.420035] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:38.691 08:13:52 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:16:38.691 [2024-11-20 08:13:52.653142] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:43.952 Initializing NVMe Controllers 00:16:43.952 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:43.952 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:16:43.952 Initialization complete. Launching workers. 
00:16:43.952 ======================================================== 00:16:43.952 Latency(us) 00:16:43.952 Device Information : IOPS MiB/s Average min max 00:16:43.952 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16042.36 62.67 7978.21 4988.30 10976.12 00:16:43.952 ======================================================== 00:16:43.952 Total : 16042.36 62.67 7978.21 4988.30 10976.12 00:16:43.952 00:16:43.952 [2024-11-20 08:13:57.686874] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:43.952 08:13:57 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:16:43.952 [2024-11-20 08:13:57.900893] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:49.211 [2024-11-20 08:14:02.978523] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:49.211 Initializing NVMe Controllers 00:16:49.211 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:49.211 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:16:49.211 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:16:49.211 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:16:49.211 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:16:49.211 Initialization complete. Launching workers. 
00:16:49.212 Starting thread on core 2 00:16:49.212 Starting thread on core 3 00:16:49.212 Starting thread on core 1 00:16:49.212 08:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:16:49.469 [2024-11-20 08:14:03.276652] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:52.750 [2024-11-20 08:14:06.339290] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:52.750 Initializing NVMe Controllers 00:16:52.750 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:52.750 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:52.750 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:16:52.750 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:16:52.750 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:16:52.750 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:16:52.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:16:52.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:16:52.750 Initialization complete. Launching workers. 
00:16:52.750 Starting thread on core 1 with urgent priority queue 00:16:52.750 Starting thread on core 2 with urgent priority queue 00:16:52.750 Starting thread on core 3 with urgent priority queue 00:16:52.750 Starting thread on core 0 with urgent priority queue 00:16:52.750 SPDK bdev Controller (SPDK1 ) core 0: 8142.67 IO/s 12.28 secs/100000 ios 00:16:52.750 SPDK bdev Controller (SPDK1 ) core 1: 8745.00 IO/s 11.44 secs/100000 ios 00:16:52.750 SPDK bdev Controller (SPDK1 ) core 2: 8533.33 IO/s 11.72 secs/100000 ios 00:16:52.750 SPDK bdev Controller (SPDK1 ) core 3: 10459.00 IO/s 9.56 secs/100000 ios 00:16:52.750 ======================================================== 00:16:52.750 00:16:52.750 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:52.750 [2024-11-20 08:14:06.633679] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:52.750 Initializing NVMe Controllers 00:16:52.750 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:52.750 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:52.750 Namespace ID: 1 size: 0GB 00:16:52.750 Initialization complete. 00:16:52.750 INFO: using host memory buffer for IO 00:16:52.750 Hello world! 
00:16:52.750 [2024-11-20 08:14:06.667903] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:52.750 08:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:16:53.008 [2024-11-20 08:14:06.943665] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:53.940 Initializing NVMe Controllers 00:16:53.940 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:53.940 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:53.940 Initialization complete. Launching workers. 00:16:53.940 submit (in ns) avg, min, max = 5814.3, 3161.9, 4002362.9 00:16:53.940 complete (in ns) avg, min, max = 18452.3, 1716.2, 4002460.0 00:16:53.940 00:16:53.940 Submit histogram 00:16:53.940 ================ 00:16:53.940 Range in us Cumulative Count 00:16:53.940 3.154 - 3.170: 0.0060% ( 1) 00:16:53.940 3.170 - 3.185: 0.0241% ( 3) 00:16:53.940 3.185 - 3.200: 0.0361% ( 2) 00:16:53.940 3.200 - 3.215: 0.1265% ( 15) 00:16:53.940 3.215 - 3.230: 1.0483% ( 153) 00:16:53.940 3.230 - 3.246: 3.6872% ( 438) 00:16:53.940 3.246 - 3.261: 7.4045% ( 617) 00:16:53.940 3.261 - 3.276: 11.7906% ( 728) 00:16:53.940 3.276 - 3.291: 16.3815% ( 762) 00:16:53.940 3.291 - 3.307: 22.1834% ( 963) 00:16:53.940 3.307 - 3.322: 28.0576% ( 975) 00:16:53.940 3.322 - 3.337: 33.9981% ( 986) 00:16:53.940 3.337 - 3.352: 40.4145% ( 1065) 00:16:53.940 3.352 - 3.368: 46.0718% ( 939) 00:16:53.940 3.368 - 3.383: 52.7774% ( 1113) 00:16:53.940 3.383 - 3.398: 60.1940% ( 1231) 00:16:53.940 3.398 - 3.413: 66.0742% ( 976) 00:16:53.940 3.413 - 3.429: 71.2134% ( 853) 00:16:53.940 3.429 - 3.444: 75.9549% ( 787) 00:16:53.940 3.444 - 3.459: 80.1844% ( 702) 00:16:53.940 3.459 - 3.474: 83.0040% ( 468) 
00:16:53.940 3.474 - 3.490: 84.9500% ( 323) 00:16:53.940 3.490 - 3.505: 86.2514% ( 216) 00:16:53.940 3.505 - 3.520: 87.2756% ( 170) 00:16:53.940 3.520 - 3.535: 87.9323% ( 109) 00:16:53.940 3.535 - 3.550: 88.6071% ( 112) 00:16:53.940 3.550 - 3.566: 89.5469% ( 156) 00:16:53.940 3.566 - 3.581: 90.2880% ( 123) 00:16:53.940 3.581 - 3.596: 91.0712% ( 130) 00:16:53.940 3.596 - 3.611: 92.1135% ( 173) 00:16:53.940 3.611 - 3.627: 92.9871% ( 145) 00:16:53.940 3.627 - 3.642: 93.9993% ( 168) 00:16:53.940 3.642 - 3.657: 94.9813% ( 163) 00:16:53.940 3.657 - 3.672: 95.8730% ( 148) 00:16:53.940 3.672 - 3.688: 96.7104% ( 139) 00:16:53.940 3.688 - 3.703: 97.3973% ( 114) 00:16:53.940 3.703 - 3.718: 97.9757% ( 96) 00:16:53.940 3.718 - 3.733: 98.3612% ( 64) 00:16:53.940 3.733 - 3.749: 98.7468% ( 64) 00:16:53.940 3.749 - 3.764: 98.9878% ( 40) 00:16:53.940 3.764 - 3.779: 99.1866% ( 33) 00:16:53.940 3.779 - 3.794: 99.3734% ( 31) 00:16:53.940 3.794 - 3.810: 99.4819% ( 18) 00:16:53.940 3.810 - 3.825: 99.5722% ( 15) 00:16:53.940 3.825 - 3.840: 99.5903% ( 3) 00:16:53.940 3.840 - 3.855: 99.6204% ( 5) 00:16:53.940 3.855 - 3.870: 99.6265% ( 1) 00:16:53.940 3.870 - 3.886: 99.6385% ( 2) 00:16:53.940 3.886 - 3.901: 99.6506% ( 2) 00:16:53.940 4.145 - 4.175: 99.6566% ( 1) 00:16:53.940 5.272 - 5.303: 99.6626% ( 1) 00:16:53.940 5.699 - 5.730: 99.6686% ( 1) 00:16:53.940 5.912 - 5.943: 99.6807% ( 2) 00:16:53.940 6.004 - 6.034: 99.6867% ( 1) 00:16:53.940 6.034 - 6.065: 99.6927% ( 1) 00:16:53.940 6.065 - 6.095: 99.6988% ( 1) 00:16:53.940 6.187 - 6.217: 99.7048% ( 1) 00:16:53.940 6.370 - 6.400: 99.7108% ( 1) 00:16:53.940 6.400 - 6.430: 99.7168% ( 1) 00:16:53.940 6.522 - 6.552: 99.7289% ( 2) 00:16:53.940 6.766 - 6.796: 99.7349% ( 1) 00:16:53.940 6.827 - 6.857: 99.7409% ( 1) 00:16:53.940 6.918 - 6.949: 99.7470% ( 1) 00:16:53.940 7.010 - 7.040: 99.7530% ( 1) 00:16:53.940 7.040 - 7.070: 99.7590% ( 1) 00:16:53.940 7.101 - 7.131: 99.7711% ( 2) 00:16:53.940 7.131 - 7.162: 99.7771% ( 1) 00:16:53.940 7.162 - 7.192: 
99.7831% ( 1) 00:16:53.940 7.223 - 7.253: 99.7891% ( 1) 00:16:53.940 7.284 - 7.314: 99.7952% ( 1) 00:16:53.940 7.375 - 7.406: 99.8012% ( 1) 00:16:53.940 7.497 - 7.528: 99.8072% ( 1) 00:16:53.940 7.528 - 7.558: 99.8193% ( 2) 00:16:53.940 7.619 - 7.650: 99.8253% ( 1) 00:16:53.940 7.802 - 7.863: 99.8313% ( 1) 00:16:53.940 7.985 - 8.046: 99.8373% ( 1) 00:16:53.940 8.046 - 8.107: 99.8434% ( 1) 00:16:53.940 8.229 - 8.290: 99.8494% ( 1) 00:16:53.941 8.290 - 8.350: 99.8614% ( 2) 00:16:53.941 [2024-11-20 08:14:07.963611] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:54.198 8.411 - 8.472: 99.8735% ( 2) 00:16:54.198 8.533 - 8.594: 99.8795% ( 1) 00:16:54.198 8.594 - 8.655: 99.8855% ( 1) 00:16:54.198 8.716 - 8.777: 99.8916% ( 1) 00:16:54.198 8.777 - 8.838: 99.9036% ( 2) 00:16:54.198 8.899 - 8.960: 99.9096% ( 1) 00:16:54.198 8.960 - 9.021: 99.9157% ( 1) 00:16:54.198 9.813 - 9.874: 99.9217% ( 1) 00:16:54.198 12.130 - 12.190: 99.9277% ( 1) 00:16:54.198 18.286 - 18.408: 99.9337% ( 1) 00:16:54.198 19.505 - 19.627: 99.9398% ( 1) 00:16:54.198 3994.575 - 4025.783: 100.0000% ( 10) 00:16:54.198 00:16:54.198 Complete histogram 00:16:54.198 ================== 00:16:54.198 Range in us Cumulative Count 00:16:54.198 1.714 - 1.722: 0.0241% ( 4) 00:16:54.199 1.722 - 1.730: 0.0482% ( 4) 00:16:54.199 1.730 - 1.737: 0.0542% ( 1) 00:16:54.199 1.745 - 1.752: 0.0602% ( 1) 00:16:54.199 1.752 - 1.760: 0.2530% ( 32) 00:16:54.199 1.760 - 1.768: 2.8799% ( 436) 00:16:54.199 1.768 - 1.775: 10.5193% ( 1268) 00:16:54.199 1.775 - 1.783: 17.6166% ( 1178) 00:16:54.199 1.783 - 1.790: 20.3759% ( 458) 00:16:54.199 1.790 - 1.798: 21.5508% ( 195) 00:16:54.199 1.798 - 1.806: 22.3340% ( 130) 00:16:54.199 1.806 - 1.813: 22.8702% ( 89) 00:16:54.199 1.813 - 1.821: 26.1477% ( 544) 00:16:54.199 1.821 - 1.829: 42.4931% ( 2713) 00:16:54.199 1.829 - 1.836: 68.8878% ( 4381) 00:16:54.199 1.836 - 1.844: 83.9017% ( 2492) 00:16:54.199 1.844 - 1.851: 89.9867% ( 1010) 
00:16:54.199 1.851 - 1.859: 92.9751% ( 496) 00:16:54.199 1.859 - 1.867: 94.5234% ( 257) 00:16:54.199 1.867 - 1.874: 95.0295% ( 84) 00:16:54.199 1.874 - 1.882: 95.2344% ( 34) 00:16:54.199 1.882 - 1.890: 95.5115% ( 46) 00:16:54.199 1.890 - 1.897: 95.9814% ( 78) 00:16:54.199 1.897 - 1.905: 97.0117% ( 171) 00:16:54.199 1.905 - 1.912: 98.1383% ( 187) 00:16:54.199 1.912 - 1.920: 98.7770% ( 106) 00:16:54.199 1.920 - 1.928: 99.0722% ( 49) 00:16:54.199 1.928 - 1.935: 99.2228% ( 25) 00:16:54.199 1.935 - 1.943: 99.2830% ( 10) 00:16:54.199 1.943 - 1.950: 99.3132% ( 5) 00:16:54.199 1.950 - 1.966: 99.3674% ( 9) 00:16:54.199 1.966 - 1.981: 99.3734% ( 1) 00:16:54.199 1.996 - 2.011: 99.3975% ( 4) 00:16:54.199 2.011 - 2.027: 99.4035% ( 1) 00:16:54.199 2.057 - 2.072: 99.4096% ( 1) 00:16:54.199 3.368 - 3.383: 99.4156% ( 1) 00:16:54.199 3.870 - 3.886: 99.4216% ( 1) 00:16:54.199 4.724 - 4.754: 99.4276% ( 1) 00:16:54.199 4.785 - 4.815: 99.4337% ( 1) 00:16:54.199 4.968 - 4.998: 99.4397% ( 1) 00:16:54.199 4.998 - 5.029: 99.4457% ( 1) 00:16:54.199 5.303 - 5.333: 99.4517% ( 1) 00:16:54.199 5.333 - 5.364: 99.4578% ( 1) 00:16:54.199 5.364 - 5.394: 99.4638% ( 1) 00:16:54.199 5.608 - 5.638: 99.4698% ( 1) 00:16:54.199 5.730 - 5.760: 99.4819% ( 2) 00:16:54.199 5.790 - 5.821: 99.4939% ( 2) 00:16:54.199 5.973 - 6.004: 99.5060% ( 2) 00:16:54.199 6.004 - 6.034: 99.5120% ( 1) 00:16:54.199 6.278 - 6.309: 99.5180% ( 1) 00:16:54.199 6.552 - 6.583: 99.5240% ( 1) 00:16:54.199 7.558 - 7.589: 99.5301% ( 1) 00:16:54.199 9.570 - 9.630: 99.5361% ( 1) 00:16:54.199 10.362 - 10.423: 99.5421% ( 1) 00:16:54.199 11.337 - 11.398: 99.5481% ( 1) 00:16:54.199 14.019 - 14.080: 99.5542% ( 1) 00:16:54.199 17.432 - 17.554: 99.5602% ( 1) 00:16:54.199 17.554 - 17.676: 99.5722% ( 2) 00:16:54.199 38.278 - 38.522: 99.5783% ( 1) 00:16:54.199 147.261 - 148.236: 99.5843% ( 1) 00:16:54.199 3994.575 - 4025.783: 100.0000% ( 69) 00:16:54.199 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- 
# aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:54.199 [ 00:16:54.199 { 00:16:54.199 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:54.199 "subtype": "Discovery", 00:16:54.199 "listen_addresses": [], 00:16:54.199 "allow_any_host": true, 00:16:54.199 "hosts": [] 00:16:54.199 }, 00:16:54.199 { 00:16:54.199 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:54.199 "subtype": "NVMe", 00:16:54.199 "listen_addresses": [ 00:16:54.199 { 00:16:54.199 "trtype": "VFIOUSER", 00:16:54.199 "adrfam": "IPv4", 00:16:54.199 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:54.199 "trsvcid": "0" 00:16:54.199 } 00:16:54.199 ], 00:16:54.199 "allow_any_host": true, 00:16:54.199 "hosts": [], 00:16:54.199 "serial_number": "SPDK1", 00:16:54.199 "model_number": "SPDK bdev Controller", 00:16:54.199 "max_namespaces": 32, 00:16:54.199 "min_cntlid": 1, 00:16:54.199 "max_cntlid": 65519, 00:16:54.199 "namespaces": [ 00:16:54.199 { 00:16:54.199 "nsid": 1, 00:16:54.199 "bdev_name": "Malloc1", 00:16:54.199 "name": "Malloc1", 00:16:54.199 "nguid": "5E14E16D9FBB450A8559C3B5210D2626", 00:16:54.199 "uuid": "5e14e16d-9fbb-450a-8559-c3b5210d2626" 00:16:54.199 } 00:16:54.199 ] 00:16:54.199 }, 00:16:54.199 { 00:16:54.199 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:54.199 "subtype": "NVMe", 00:16:54.199 "listen_addresses": [ 00:16:54.199 { 00:16:54.199 "trtype": 
"VFIOUSER", 00:16:54.199 "adrfam": "IPv4", 00:16:54.199 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:54.199 "trsvcid": "0" 00:16:54.199 } 00:16:54.199 ], 00:16:54.199 "allow_any_host": true, 00:16:54.199 "hosts": [], 00:16:54.199 "serial_number": "SPDK2", 00:16:54.199 "model_number": "SPDK bdev Controller", 00:16:54.199 "max_namespaces": 32, 00:16:54.199 "min_cntlid": 1, 00:16:54.199 "max_cntlid": 65519, 00:16:54.199 "namespaces": [ 00:16:54.199 { 00:16:54.199 "nsid": 1, 00:16:54.199 "bdev_name": "Malloc2", 00:16:54.199 "name": "Malloc2", 00:16:54.199 "nguid": "CF9A85CB24A742008B8FF7F9754BFE15", 00:16:54.199 "uuid": "cf9a85cb-24a7-4200-8b8f-f7f9754bfe15" 00:16:54.199 } 00:16:54.199 ] 00:16:54.199 } 00:16:54.199 ] 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1664932 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:16:54.199 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:16:54.456 [2024-11-20 08:14:08.374572] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:16:54.456 Malloc3 00:16:54.456 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:16:54.714 [2024-11-20 08:14:08.610346] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:16:54.714 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:16:54.714 Asynchronous Event Request test 00:16:54.714 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:16:54.714 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:16:54.714 Registering asynchronous event callbacks... 00:16:54.714 Starting namespace attribute notice tests for all controllers... 00:16:54.714 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:16:54.714 aer_cb - Changed Namespace 00:16:54.714 Cleaning up... 
00:16:54.973 [ 00:16:54.973 { 00:16:54.973 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:16:54.973 "subtype": "Discovery", 00:16:54.973 "listen_addresses": [], 00:16:54.973 "allow_any_host": true, 00:16:54.973 "hosts": [] 00:16:54.973 }, 00:16:54.973 { 00:16:54.973 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:16:54.973 "subtype": "NVMe", 00:16:54.973 "listen_addresses": [ 00:16:54.973 { 00:16:54.973 "trtype": "VFIOUSER", 00:16:54.973 "adrfam": "IPv4", 00:16:54.973 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:16:54.973 "trsvcid": "0" 00:16:54.973 } 00:16:54.973 ], 00:16:54.973 "allow_any_host": true, 00:16:54.973 "hosts": [], 00:16:54.973 "serial_number": "SPDK1", 00:16:54.973 "model_number": "SPDK bdev Controller", 00:16:54.973 "max_namespaces": 32, 00:16:54.973 "min_cntlid": 1, 00:16:54.973 "max_cntlid": 65519, 00:16:54.973 "namespaces": [ 00:16:54.973 { 00:16:54.973 "nsid": 1, 00:16:54.973 "bdev_name": "Malloc1", 00:16:54.973 "name": "Malloc1", 00:16:54.973 "nguid": "5E14E16D9FBB450A8559C3B5210D2626", 00:16:54.973 "uuid": "5e14e16d-9fbb-450a-8559-c3b5210d2626" 00:16:54.973 }, 00:16:54.973 { 00:16:54.973 "nsid": 2, 00:16:54.973 "bdev_name": "Malloc3", 00:16:54.973 "name": "Malloc3", 00:16:54.973 "nguid": "102D6114430742AA88F6A0780C19EC78", 00:16:54.973 "uuid": "102d6114-4307-42aa-88f6-a0780c19ec78" 00:16:54.973 } 00:16:54.973 ] 00:16:54.973 }, 00:16:54.973 { 00:16:54.973 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:16:54.973 "subtype": "NVMe", 00:16:54.973 "listen_addresses": [ 00:16:54.973 { 00:16:54.973 "trtype": "VFIOUSER", 00:16:54.973 "adrfam": "IPv4", 00:16:54.973 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:16:54.973 "trsvcid": "0" 00:16:54.973 } 00:16:54.973 ], 00:16:54.973 "allow_any_host": true, 00:16:54.973 "hosts": [], 00:16:54.973 "serial_number": "SPDK2", 00:16:54.973 "model_number": "SPDK bdev Controller", 00:16:54.973 "max_namespaces": 32, 00:16:54.973 "min_cntlid": 1, 00:16:54.973 "max_cntlid": 65519, 00:16:54.973 "namespaces": [ 
00:16:54.973 { 00:16:54.973 "nsid": 1, 00:16:54.973 "bdev_name": "Malloc2", 00:16:54.973 "name": "Malloc2", 00:16:54.973 "nguid": "CF9A85CB24A742008B8FF7F9754BFE15", 00:16:54.973 "uuid": "cf9a85cb-24a7-4200-8b8f-f7f9754bfe15" 00:16:54.973 } 00:16:54.973 ] 00:16:54.973 } 00:16:54.973 ] 00:16:54.973 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1664932 00:16:54.973 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:16:54.973 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:16:54.973 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:16:54.974 08:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:16:54.974 [2024-11-20 08:14:08.840389] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:16:54.974 [2024-11-20 08:14:08.840419] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1664951 ] 00:16:54.974 [2024-11-20 08:14:08.877548] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:16:54.974 [2024-11-20 08:14:08.882810] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:54.974 [2024-11-20 08:14:08.882832] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f7e14de9000 00:16:54.974 [2024-11-20 08:14:08.883814] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:54.974 [2024-11-20 08:14:08.884821] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:54.974 [2024-11-20 08:14:08.885836] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:54.974 [2024-11-20 08:14:08.886842] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:54.974 [2024-11-20 08:14:08.887848] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:54.974 [2024-11-20 08:14:08.888859] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:54.974 [2024-11-20 08:14:08.889865] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:16:54.974 
[2024-11-20 08:14:08.890875] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:16:54.974 [2024-11-20 08:14:08.891885] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:16:54.974 [2024-11-20 08:14:08.891895] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f7e14dde000 00:16:54.974 [2024-11-20 08:14:08.892810] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:54.974 [2024-11-20 08:14:08.902165] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:16:54.974 [2024-11-20 08:14:08.902189] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:16:54.974 [2024-11-20 08:14:08.907275] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:54.974 [2024-11-20 08:14:08.907314] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:16:54.974 [2024-11-20 08:14:08.907382] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:16:54.974 [2024-11-20 08:14:08.907395] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:16:54.974 [2024-11-20 08:14:08.907400] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:16:54.974 [2024-11-20 08:14:08.908277] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:16:54.974 [2024-11-20 08:14:08.908286] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:16:54.974 [2024-11-20 08:14:08.908293] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:16:54.974 [2024-11-20 08:14:08.909280] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:16:54.974 [2024-11-20 08:14:08.909290] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:16:54.974 [2024-11-20 08:14:08.909299] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:16:54.974 [2024-11-20 08:14:08.910294] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:16:54.974 [2024-11-20 08:14:08.910303] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:16:54.974 [2024-11-20 08:14:08.911301] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:16:54.974 [2024-11-20 08:14:08.911309] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:16:54.974 [2024-11-20 08:14:08.911314] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:16:54.974 [2024-11-20 08:14:08.911320] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:16:54.974 [2024-11-20 08:14:08.911427] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:16:54.974 [2024-11-20 08:14:08.911432] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:16:54.974 [2024-11-20 08:14:08.911436] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:16:54.974 [2024-11-20 08:14:08.912308] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:16:54.974 [2024-11-20 08:14:08.913313] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:16:54.974 [2024-11-20 08:14:08.914316] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:54.974 [2024-11-20 08:14:08.915319] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:16:54.974 [2024-11-20 08:14:08.915358] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:16:54.974 [2024-11-20 08:14:08.916329] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:16:54.974 [2024-11-20 08:14:08.916337] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:16:54.974 [2024-11-20 08:14:08.916342] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:16:54.974 [2024-11-20 08:14:08.916359] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:16:54.974 [2024-11-20 08:14:08.916367] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:16:54.974 [2024-11-20 08:14:08.916378] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:54.974 [2024-11-20 08:14:08.916383] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:54.974 [2024-11-20 08:14:08.916386] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:54.974 [2024-11-20 08:14:08.916398] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:54.974 [2024-11-20 08:14:08.925212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:16:54.974 [2024-11-20 08:14:08.925223] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:16:54.974 [2024-11-20 08:14:08.925228] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:16:54.974 [2024-11-20 08:14:08.925231] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:16:54.974 [2024-11-20 08:14:08.925236] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:16:54.974 [2024-11-20 08:14:08.925243] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:16:54.974 [2024-11-20 08:14:08.925248] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:16:54.974 [2024-11-20 08:14:08.925252] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:16:54.974 [2024-11-20 08:14:08.925260] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:16:54.974 [2024-11-20 08:14:08.925270] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:16:54.974 [2024-11-20 08:14:08.933208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:16:54.974 [2024-11-20 08:14:08.933219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.974 [2024-11-20 08:14:08.933227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.974 [2024-11-20 08:14:08.933234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.974 [2024-11-20 08:14:08.933241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:16:54.974 [2024-11-20 08:14:08.933245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:16:54.974 [2024-11-20 08:14:08.933251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:16:54.974 [2024-11-20 08:14:08.933259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:16:54.974 [2024-11-20 08:14:08.941208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:16:54.974 [2024-11-20 08:14:08.941217] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:16:54.974 [2024-11-20 08:14:08.941222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:16:54.974 [2024-11-20 08:14:08.941228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:16:54.974 [2024-11-20 08:14:08.941234] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.941242] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:54.975 [2024-11-20 08:14:08.949216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:16:54.975 [2024-11-20 08:14:08.949273] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.949280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:16:54.975 
[2024-11-20 08:14:08.949287] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:16:54.975 [2024-11-20 08:14:08.949291] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:16:54.975 [2024-11-20 08:14:08.949294] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:54.975 [2024-11-20 08:14:08.949300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:16:54.975 [2024-11-20 08:14:08.957206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:16:54.975 [2024-11-20 08:14:08.957216] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:16:54.975 [2024-11-20 08:14:08.957228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.957235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.957241] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:54.975 [2024-11-20 08:14:08.957245] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:54.975 [2024-11-20 08:14:08.957248] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:54.975 [2024-11-20 08:14:08.957254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:54.975 [2024-11-20 08:14:08.965206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:16:54.975 [2024-11-20 08:14:08.965219] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.965226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.965233] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:16:54.975 [2024-11-20 08:14:08.965237] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:54.975 [2024-11-20 08:14:08.965240] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:54.975 [2024-11-20 08:14:08.965246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:54.975 [2024-11-20 08:14:08.973208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:16:54.975 [2024-11-20 08:14:08.973216] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.973222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.973229] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.973235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.973241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.973246] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.973250] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:16:54.975 [2024-11-20 08:14:08.973254] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:16:54.975 [2024-11-20 08:14:08.973259] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:16:54.975 [2024-11-20 08:14:08.973274] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:16:54.975 [2024-11-20 08:14:08.981208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:16:54.975 [2024-11-20 08:14:08.981221] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:16:54.975 [2024-11-20 08:14:08.989208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:16:54.975 [2024-11-20 08:14:08.989220] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:16:55.234 [2024-11-20 08:14:08.997207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:16:55.234 [2024-11-20 
08:14:08.997219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:16:55.234 [2024-11-20 08:14:09.005206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:16:55.234 [2024-11-20 08:14:09.005221] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:16:55.234 [2024-11-20 08:14:09.005225] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:16:55.234 [2024-11-20 08:14:09.005228] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:16:55.234 [2024-11-20 08:14:09.005231] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:16:55.234 [2024-11-20 08:14:09.005234] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:16:55.234 [2024-11-20 08:14:09.005240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:16:55.234 [2024-11-20 08:14:09.005247] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:16:55.234 [2024-11-20 08:14:09.005251] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:16:55.234 [2024-11-20 08:14:09.005254] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:55.234 [2024-11-20 08:14:09.005259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:16:55.234 [2024-11-20 08:14:09.005265] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:16:55.234 [2024-11-20 08:14:09.005269] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:16:55.234 [2024-11-20 08:14:09.005272] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:55.234 [2024-11-20 08:14:09.005277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:16:55.234 [2024-11-20 08:14:09.005286] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:16:55.234 [2024-11-20 08:14:09.005290] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:16:55.234 [2024-11-20 08:14:09.005293] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:16:55.234 [2024-11-20 08:14:09.005298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:16:55.234 [2024-11-20 08:14:09.013210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:16:55.234 [2024-11-20 08:14:09.013224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:16:55.234 [2024-11-20 08:14:09.013233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:16:55.234 [2024-11-20 08:14:09.013239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:16:55.234 ===================================================== 00:16:55.234 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:16:55.234 ===================================================== 00:16:55.234 Controller Capabilities/Features 00:16:55.234 
================================ 00:16:55.234 Vendor ID: 4e58 00:16:55.234 Subsystem Vendor ID: 4e58 00:16:55.234 Serial Number: SPDK2 00:16:55.234 Model Number: SPDK bdev Controller 00:16:55.234 Firmware Version: 25.01 00:16:55.234 Recommended Arb Burst: 6 00:16:55.234 IEEE OUI Identifier: 8d 6b 50 00:16:55.234 Multi-path I/O 00:16:55.234 May have multiple subsystem ports: Yes 00:16:55.234 May have multiple controllers: Yes 00:16:55.234 Associated with SR-IOV VF: No 00:16:55.234 Max Data Transfer Size: 131072 00:16:55.234 Max Number of Namespaces: 32 00:16:55.234 Max Number of I/O Queues: 127 00:16:55.234 NVMe Specification Version (VS): 1.3 00:16:55.234 NVMe Specification Version (Identify): 1.3 00:16:55.234 Maximum Queue Entries: 256 00:16:55.234 Contiguous Queues Required: Yes 00:16:55.234 Arbitration Mechanisms Supported 00:16:55.234 Weighted Round Robin: Not Supported 00:16:55.234 Vendor Specific: Not Supported 00:16:55.234 Reset Timeout: 15000 ms 00:16:55.234 Doorbell Stride: 4 bytes 00:16:55.234 NVM Subsystem Reset: Not Supported 00:16:55.234 Command Sets Supported 00:16:55.234 NVM Command Set: Supported 00:16:55.234 Boot Partition: Not Supported 00:16:55.234 Memory Page Size Minimum: 4096 bytes 00:16:55.234 Memory Page Size Maximum: 4096 bytes 00:16:55.234 Persistent Memory Region: Not Supported 00:16:55.234 Optional Asynchronous Events Supported 00:16:55.234 Namespace Attribute Notices: Supported 00:16:55.234 Firmware Activation Notices: Not Supported 00:16:55.234 ANA Change Notices: Not Supported 00:16:55.234 PLE Aggregate Log Change Notices: Not Supported 00:16:55.234 LBA Status Info Alert Notices: Not Supported 00:16:55.234 EGE Aggregate Log Change Notices: Not Supported 00:16:55.234 Normal NVM Subsystem Shutdown event: Not Supported 00:16:55.234 Zone Descriptor Change Notices: Not Supported 00:16:55.234 Discovery Log Change Notices: Not Supported 00:16:55.234 Controller Attributes 00:16:55.234 128-bit Host Identifier: Supported 00:16:55.234 
Non-Operational Permissive Mode: Not Supported 00:16:55.234 NVM Sets: Not Supported 00:16:55.234 Read Recovery Levels: Not Supported 00:16:55.234 Endurance Groups: Not Supported 00:16:55.234 Predictable Latency Mode: Not Supported 00:16:55.234 Traffic Based Keep ALive: Not Supported 00:16:55.234 Namespace Granularity: Not Supported 00:16:55.234 SQ Associations: Not Supported 00:16:55.234 UUID List: Not Supported 00:16:55.234 Multi-Domain Subsystem: Not Supported 00:16:55.234 Fixed Capacity Management: Not Supported 00:16:55.234 Variable Capacity Management: Not Supported 00:16:55.234 Delete Endurance Group: Not Supported 00:16:55.234 Delete NVM Set: Not Supported 00:16:55.234 Extended LBA Formats Supported: Not Supported 00:16:55.234 Flexible Data Placement Supported: Not Supported 00:16:55.234 00:16:55.234 Controller Memory Buffer Support 00:16:55.234 ================================ 00:16:55.234 Supported: No 00:16:55.234 00:16:55.234 Persistent Memory Region Support 00:16:55.234 ================================ 00:16:55.234 Supported: No 00:16:55.234 00:16:55.234 Admin Command Set Attributes 00:16:55.234 ============================ 00:16:55.234 Security Send/Receive: Not Supported 00:16:55.234 Format NVM: Not Supported 00:16:55.234 Firmware Activate/Download: Not Supported 00:16:55.234 Namespace Management: Not Supported 00:16:55.234 Device Self-Test: Not Supported 00:16:55.234 Directives: Not Supported 00:16:55.234 NVMe-MI: Not Supported 00:16:55.234 Virtualization Management: Not Supported 00:16:55.234 Doorbell Buffer Config: Not Supported 00:16:55.234 Get LBA Status Capability: Not Supported 00:16:55.234 Command & Feature Lockdown Capability: Not Supported 00:16:55.234 Abort Command Limit: 4 00:16:55.234 Async Event Request Limit: 4 00:16:55.234 Number of Firmware Slots: N/A 00:16:55.235 Firmware Slot 1 Read-Only: N/A 00:16:55.235 Firmware Activation Without Reset: N/A 00:16:55.235 Multiple Update Detection Support: N/A 00:16:55.235 Firmware Update 
Granularity: No Information Provided 00:16:55.235 Per-Namespace SMART Log: No 00:16:55.235 Asymmetric Namespace Access Log Page: Not Supported 00:16:55.235 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:16:55.235 Command Effects Log Page: Supported 00:16:55.235 Get Log Page Extended Data: Supported 00:16:55.235 Telemetry Log Pages: Not Supported 00:16:55.235 Persistent Event Log Pages: Not Supported 00:16:55.235 Supported Log Pages Log Page: May Support 00:16:55.235 Commands Supported & Effects Log Page: Not Supported 00:16:55.235 Feature Identifiers & Effects Log Page:May Support 00:16:55.235 NVMe-MI Commands & Effects Log Page: May Support 00:16:55.235 Data Area 4 for Telemetry Log: Not Supported 00:16:55.235 Error Log Page Entries Supported: 128 00:16:55.235 Keep Alive: Supported 00:16:55.235 Keep Alive Granularity: 10000 ms 00:16:55.235 00:16:55.235 NVM Command Set Attributes 00:16:55.235 ========================== 00:16:55.235 Submission Queue Entry Size 00:16:55.235 Max: 64 00:16:55.235 Min: 64 00:16:55.235 Completion Queue Entry Size 00:16:55.235 Max: 16 00:16:55.235 Min: 16 00:16:55.235 Number of Namespaces: 32 00:16:55.235 Compare Command: Supported 00:16:55.235 Write Uncorrectable Command: Not Supported 00:16:55.235 Dataset Management Command: Supported 00:16:55.235 Write Zeroes Command: Supported 00:16:55.235 Set Features Save Field: Not Supported 00:16:55.235 Reservations: Not Supported 00:16:55.235 Timestamp: Not Supported 00:16:55.235 Copy: Supported 00:16:55.235 Volatile Write Cache: Present 00:16:55.235 Atomic Write Unit (Normal): 1 00:16:55.235 Atomic Write Unit (PFail): 1 00:16:55.235 Atomic Compare & Write Unit: 1 00:16:55.235 Fused Compare & Write: Supported 00:16:55.235 Scatter-Gather List 00:16:55.235 SGL Command Set: Supported (Dword aligned) 00:16:55.235 SGL Keyed: Not Supported 00:16:55.235 SGL Bit Bucket Descriptor: Not Supported 00:16:55.235 SGL Metadata Pointer: Not Supported 00:16:55.235 Oversized SGL: Not Supported 00:16:55.235 SGL 
Metadata Address: Not Supported 00:16:55.235 SGL Offset: Not Supported 00:16:55.235 Transport SGL Data Block: Not Supported 00:16:55.235 Replay Protected Memory Block: Not Supported 00:16:55.235 00:16:55.235 Firmware Slot Information 00:16:55.235 ========================= 00:16:55.235 Active slot: 1 00:16:55.235 Slot 1 Firmware Revision: 25.01 00:16:55.235 00:16:55.235 00:16:55.235 Commands Supported and Effects 00:16:55.235 ============================== 00:16:55.235 Admin Commands 00:16:55.235 -------------- 00:16:55.235 Get Log Page (02h): Supported 00:16:55.235 Identify (06h): Supported 00:16:55.235 Abort (08h): Supported 00:16:55.235 Set Features (09h): Supported 00:16:55.235 Get Features (0Ah): Supported 00:16:55.235 Asynchronous Event Request (0Ch): Supported 00:16:55.235 Keep Alive (18h): Supported 00:16:55.235 I/O Commands 00:16:55.235 ------------ 00:16:55.235 Flush (00h): Supported LBA-Change 00:16:55.235 Write (01h): Supported LBA-Change 00:16:55.235 Read (02h): Supported 00:16:55.235 Compare (05h): Supported 00:16:55.235 Write Zeroes (08h): Supported LBA-Change 00:16:55.235 Dataset Management (09h): Supported LBA-Change 00:16:55.235 Copy (19h): Supported LBA-Change 00:16:55.235 00:16:55.235 Error Log 00:16:55.235 ========= 00:16:55.235 00:16:55.235 Arbitration 00:16:55.235 =========== 00:16:55.235 Arbitration Burst: 1 00:16:55.235 00:16:55.235 Power Management 00:16:55.235 ================ 00:16:55.235 Number of Power States: 1 00:16:55.235 Current Power State: Power State #0 00:16:55.235 Power State #0: 00:16:55.235 Max Power: 0.00 W 00:16:55.235 Non-Operational State: Operational 00:16:55.235 Entry Latency: Not Reported 00:16:55.235 Exit Latency: Not Reported 00:16:55.235 Relative Read Throughput: 0 00:16:55.235 Relative Read Latency: 0 00:16:55.235 Relative Write Throughput: 0 00:16:55.235 Relative Write Latency: 0 00:16:55.235 Idle Power: Not Reported 00:16:55.235 Active Power: Not Reported 00:16:55.235 Non-Operational Permissive Mode: Not 
Supported 00:16:55.235 00:16:55.235 Health Information 00:16:55.235 ================== 00:16:55.235 Critical Warnings: 00:16:55.235 Available Spare Space: OK 00:16:55.235 Temperature: OK 00:16:55.235 Device Reliability: OK 00:16:55.235 Read Only: No 00:16:55.235 Volatile Memory Backup: OK 00:16:55.235 Current Temperature: 0 Kelvin (-273 Celsius) 00:16:55.235 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:16:55.235 Available Spare: 0% 00:16:55.235 Available Spare Threshold: 0% 00:16:55.235 Life Percentage Used: 0% 00:16:55.235 Data Units Read: 0 00:16:55.235 Data Units Written: 0 00:16:55.235 Host Read Commands: 0 00:16:55.235 Host Write Commands: 0 00:16:55.235 Controller Busy Time: 0 minutes 00:16:55.235 Power Cycles: 0 00:16:55.235 Power On Hours: 0 hours 00:16:55.235 Unsafe Shutdowns: 0 00:16:55.235 Unrecoverable Media Errors: 0 00:16:55.235 Lifetime Error Log Entries: 0 00:16:55.235 Warning Temperature Time: 0 minutes 00:16:55.235 Critical Temperature Time: 0 minutes 00:16:55.235 00:16:55.235 Number of Queues 00:16:55.235 ================ 00:16:55.235 Number of I/O Submission Queues: 127 00:16:55.235 Number of I/O Completion Queues: 127 00:16:55.235 00:16:55.235 Active Namespaces 00:16:55.235 ================= 00:16:55.235 Namespace ID:1 00:16:55.235 Error Recovery Timeout: Unlimited 00:16:55.235 Command Set Identifier: NVM (00h) 00:16:55.235 Deallocate: Supported 00:16:55.235 Deallocated/Unwritten Error: Not Supported 00:16:55.235 Deallocated Read Value: Unknown 00:16:55.235 Deallocate in Write Zeroes: Not Supported 00:16:55.235 Deallocated Guard Field: 0xFFFF 00:16:55.235 Flush: Supported 00:16:55.235 Reservation: Supported 00:16:55.235 Namespace Sharing Capabilities: Multiple Controllers 00:16:55.235 Size (in LBAs): 131072 (0GiB) 00:16:55.235 Capacity (in LBAs): 131072 (0GiB) 00:16:55.235 Utilization (in LBAs): 131072 (0GiB) 00:16:55.235 NGUID: CF9A85CB24A742008B8FF7F9754BFE15 00:16:55.235 UUID: cf9a85cb-24a7-4200-8b8f-f7f9754bfe15 00:16:55.235 Thin Provisioning: Not Supported 00:16:55.235 Per-NS Atomic Units: Yes 00:16:55.235 Atomic Boundary Size (Normal): 0 00:16:55.235 Atomic Boundary Size (PFail): 0 00:16:55.235 Atomic Boundary Offset: 0 00:16:55.235 Maximum Single Source Range Length: 65535 00:16:55.235 Maximum Copy Length: 65535 00:16:55.235 Maximum Source Range Count: 1 00:16:55.235 NGUID/EUI64 Never Reused: No 00:16:55.235 Namespace Write Protected: No 00:16:55.235 Number of LBA Formats: 1 00:16:55.236 Current LBA Format: LBA Format #00 00:16:55.236 LBA Format #00: Data Size: 512 Metadata Size: 0 00:16:55.236 00:16:55.236
[2024-11-20 08:14:09.013322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:16:55.235 [2024-11-20 08:14:09.021207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:16:55.235 [2024-11-20 08:14:09.021234] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:16:55.235 [2024-11-20 08:14:09.021242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.235 [2024-11-20 08:14:09.021248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.235 [2024-11-20 08:14:09.021253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.235 [2024-11-20 08:14:09.021259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:16:55.235 [2024-11-20 08:14:09.021312] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:16:55.235 [2024-11-20 08:14:09.021323] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:16:55.235
[2024-11-20 08:14:09.022319] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:16:55.235 [2024-11-20 08:14:09.022364] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:16:55.235 [2024-11-20 08:14:09.022370] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:16:55.235 [2024-11-20 08:14:09.023321] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:16:55.235 [2024-11-20 08:14:09.023332] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:16:55.235 [2024-11-20 08:14:09.023378] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:16:55.235 [2024-11-20 08:14:09.024342] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:16:55.235
08:14:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:16:55.236 [2024-11-20 08:14:09.243572] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:00.495 Initializing NVMe Controllers 00:17:00.495 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:00.495 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:17:00.495 Initialization complete. Launching workers. 00:17:00.495 ======================================================== 00:17:00.495 Latency(us) 00:17:00.495 Device Information : IOPS MiB/s Average min max 00:17:00.495 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39913.46 155.91 3206.54 939.06 8631.96 00:17:00.495 ======================================================== 00:17:00.495 Total : 39913.46 155.91 3206.54 939.06 8631.96 00:17:00.495 00:17:00.495 [2024-11-20 08:14:14.353462] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:00.495 08:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:17:00.753 [2024-11-20 08:14:14.596229] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:06.042 Initializing NVMe Controllers 00:17:06.042 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:06.042 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:17:06.042 Initialization complete. Launching workers. 
00:17:06.042 ======================================================== 00:17:06.042 Latency(us) 00:17:06.042 Device Information : IOPS MiB/s Average min max 00:17:06.042 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39968.71 156.13 3202.11 957.75 7086.41 00:17:06.042 ======================================================== 00:17:06.042 Total : 39968.71 156.13 3202.11 957.75 7086.41 00:17:06.042 00:17:06.042 [2024-11-20 08:14:19.619045] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:06.042 08:14:19 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:17:06.042 [2024-11-20 08:14:19.819300] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:11.369 [2024-11-20 08:14:24.958308] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:11.369 Initializing NVMe Controllers 00:17:11.369 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:11.369 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:17:11.369 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:17:11.369 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:17:11.369 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:17:11.369 Initialization complete. Launching workers. 
00:17:11.369 Starting thread on core 2 00:17:11.369 Starting thread on core 3 00:17:11.369 Starting thread on core 1 00:17:11.369 08:14:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:17:11.369 [2024-11-20 08:14:25.258605] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:14.651 [2024-11-20 08:14:28.318384] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:14.651 Initializing NVMe Controllers 00:17:14.651 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:14.651 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:14.651 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:17:14.651 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:17:14.651 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:17:14.651 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:17:14.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:17:14.651 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:17:14.651 Initialization complete. Launching workers. 
00:17:14.651 Starting thread on core 1 with urgent priority queue 00:17:14.651 Starting thread on core 2 with urgent priority queue 00:17:14.651 Starting thread on core 3 with urgent priority queue 00:17:14.651 Starting thread on core 0 with urgent priority queue 00:17:14.651 SPDK bdev Controller (SPDK2 ) core 0: 9578.00 IO/s 10.44 secs/100000 ios 00:17:14.651 SPDK bdev Controller (SPDK2 ) core 1: 7902.33 IO/s 12.65 secs/100000 ios 00:17:14.651 SPDK bdev Controller (SPDK2 ) core 2: 12237.33 IO/s 8.17 secs/100000 ios 00:17:14.651 SPDK bdev Controller (SPDK2 ) core 3: 7779.67 IO/s 12.85 secs/100000 ios 00:17:14.651 ======================================================== 00:17:14.651 00:17:14.651 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:14.651 [2024-11-20 08:14:28.608643] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:14.651 Initializing NVMe Controllers 00:17:14.651 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:14.651 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:14.651 Namespace ID: 1 size: 0GB 00:17:14.651 Initialization complete. 00:17:14.651 INFO: using host memory buffer for IO 00:17:14.651 Hello world! 
00:17:14.651 [2024-11-20 08:14:28.618708] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:14.651 08:14:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:17:14.908 [2024-11-20 08:14:28.898968] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:16.281 Initializing NVMe Controllers 00:17:16.281 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:16.281 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:16.281 Initialization complete. Launching workers. 00:17:16.281 submit (in ns) avg, min, max = 5106.7, 3190.5, 3999163.8 00:17:16.281 complete (in ns) avg, min, max = 21886.8, 1710.5, 4995204.8 00:17:16.281 00:17:16.281 Submit histogram 00:17:16.281 ================ 00:17:16.281 Range in us Cumulative Count 00:17:16.281 3.185 - 3.200: 0.0121% ( 2) 00:17:16.281 3.200 - 3.215: 0.0970% ( 14) 00:17:16.281 3.215 - 3.230: 0.2667% ( 28) 00:17:16.281 3.230 - 3.246: 0.9031% ( 105) 00:17:16.281 3.246 - 3.261: 2.4912% ( 262) 00:17:16.281 3.261 - 3.276: 7.3221% ( 797) 00:17:16.281 3.276 - 3.291: 13.4137% ( 1005) 00:17:16.281 3.291 - 3.307: 19.8630% ( 1064) 00:17:16.281 3.307 - 3.322: 26.7063% ( 1129) 00:17:16.281 3.322 - 3.337: 33.0525% ( 1047) 00:17:16.281 3.337 - 3.352: 38.5138% ( 901) 00:17:16.281 3.352 - 3.368: 44.5812% ( 1001) 00:17:16.281 3.368 - 3.383: 50.7516% ( 1018) 00:17:16.281 3.383 - 3.398: 56.0492% ( 874) 00:17:16.281 3.398 - 3.413: 61.2923% ( 865) 00:17:16.281 3.413 - 3.429: 68.7659% ( 1233) 00:17:16.281 3.429 - 3.444: 74.0999% ( 880) 00:17:16.281 3.444 - 3.459: 78.3610% ( 703) 00:17:16.281 3.459 - 3.474: 82.8646% ( 743) 00:17:16.281 3.474 - 3.490: 85.2346% ( 391) 00:17:16.281 3.490 - 3.505: 86.9317% ( 
280) 00:17:16.281 3.505 - 3.520: 87.6712% ( 122) 00:17:16.281 3.520 - 3.535: 87.9561% ( 47) 00:17:16.281 3.535 - 3.550: 88.3259% ( 61) 00:17:16.281 3.550 - 3.566: 88.8411% ( 85) 00:17:16.281 3.566 - 3.581: 89.6654% ( 136) 00:17:16.281 3.581 - 3.596: 90.5443% ( 145) 00:17:16.281 3.596 - 3.611: 91.4959% ( 157) 00:17:16.281 3.611 - 3.627: 92.3809% ( 146) 00:17:16.281 3.627 - 3.642: 93.3386% ( 158) 00:17:16.281 3.642 - 3.657: 94.2357% ( 148) 00:17:16.281 3.657 - 3.672: 95.3388% ( 182) 00:17:16.281 3.672 - 3.688: 96.3268% ( 163) 00:17:16.281 3.688 - 3.703: 97.1390% ( 134) 00:17:16.281 3.703 - 3.718: 97.8361% ( 115) 00:17:16.281 3.718 - 3.733: 98.3877% ( 91) 00:17:16.281 3.733 - 3.749: 98.8483% ( 76) 00:17:16.281 3.749 - 3.764: 99.1514% ( 50) 00:17:16.281 3.764 - 3.779: 99.3090% ( 26) 00:17:16.281 3.779 - 3.794: 99.3999% ( 15) 00:17:16.281 3.794 - 3.810: 99.5212% ( 20) 00:17:16.281 3.810 - 3.825: 99.5636% ( 7) 00:17:16.281 3.825 - 3.840: 99.6060% ( 7) 00:17:16.281 3.840 - 3.855: 99.6363% ( 5) 00:17:16.281 3.855 - 3.870: 99.6606% ( 4) 00:17:16.281 3.870 - 3.886: 99.6727% ( 2) 00:17:16.281 4.023 - 4.053: 99.6787% ( 1) 00:17:16.281 4.114 - 4.145: 99.6848% ( 1) 00:17:16.281 5.486 - 5.516: 99.6909% ( 1) 00:17:16.281 5.516 - 5.547: 99.6969% ( 1) 00:17:16.281 5.943 - 5.973: 99.7030% ( 1) 00:17:16.281 6.034 - 6.065: 99.7151% ( 2) 00:17:16.281 6.065 - 6.095: 99.7212% ( 1) 00:17:16.281 6.187 - 6.217: 99.7272% ( 1) 00:17:16.281 6.309 - 6.339: 99.7333% ( 1) 00:17:16.281 6.370 - 6.400: 99.7394% ( 1) 00:17:16.281 6.552 - 6.583: 99.7454% ( 1) 00:17:16.281 6.674 - 6.705: 99.7575% ( 2) 00:17:16.281 6.766 - 6.796: 99.7697% ( 2) 00:17:16.281 6.796 - 6.827: 99.7757% ( 1) 00:17:16.281 6.857 - 6.888: 99.7818% ( 1) 00:17:16.281 6.888 - 6.918: 99.7879% ( 1) 00:17:16.281 6.949 - 6.979: 99.8000% ( 2) 00:17:16.282 6.979 - 7.010: 99.8121% ( 2) 00:17:16.282 7.010 - 7.040: 99.8242% ( 2) 00:17:16.282 7.101 - 7.131: 99.8303% ( 1) 00:17:16.282 7.192 - 7.223: 99.8363% ( 1) 00:17:16.282 7.253 - 7.284: 
99.8485% ( 2) 00:17:16.282 7.284 - 7.314: 99.8545% ( 1) 00:17:16.282 7.314 - 7.345: 99.8606% ( 1) 00:17:16.282 7.375 - 7.406: 99.8667% ( 1) 00:17:16.282 7.436 - 7.467: 99.8727% ( 1) 00:17:16.282 7.650 - 7.680: 99.8788% ( 1) 00:17:16.282 7.710 - 7.741: 99.8848% ( 1) 00:17:16.282 7.771 - 7.802: 99.8909% ( 1) 00:17:16.282 8.168 - 8.229: 99.9030% ( 2) 00:17:16.282 8.290 - 8.350: 99.9091% ( 1) 00:17:16.282 8.411 - 8.472: 99.9151% ( 1) 00:17:16.282 8.472 - 8.533: 99.9212% ( 1) 00:17:16.282 [2024-11-20 08:14:29.990171] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:16.282 8.655 - 8.716: 99.9273% ( 1) 00:17:16.282 8.716 - 8.777: 99.9394% ( 2) 00:17:16.282 8.777 - 8.838: 99.9454% ( 1) 00:17:16.282 9.021 - 9.082: 99.9515% ( 1) 00:17:16.282 9.082 - 9.143: 99.9576% ( 1) 00:17:16.282 3994.575 - 4025.783: 100.0000% ( 7) 00:17:16.282 00:17:16.282 Complete histogram 00:17:16.282 ================== 00:17:16.282 Range in us Cumulative Count 00:17:16.282 1.707 - 1.714: 0.0061% ( 1) 00:17:16.282 1.714 - 1.722: 0.0606% ( 9) 00:17:16.282 1.722 - 1.730: 0.1091% ( 8) 00:17:16.282 1.730 - 1.737: 0.1333% ( 4) 00:17:16.282 1.745 - 1.752: 0.1394% ( 1) 00:17:16.282 1.752 - 1.760: 0.7880% ( 107) 00:17:16.282 1.760 - 1.768: 9.2617% ( 1398) 00:17:16.282 1.768 - 1.775: 33.2828% ( 3963) 00:17:16.282 1.775 - 1.783: 48.7271% ( 2548) 00:17:16.282 1.783 - 1.790: 52.7518% ( 664) 00:17:16.282 1.790 - 1.798: 54.4733% ( 284) 00:17:16.282 1.798 - 1.806: 56.1280% ( 273) 00:17:16.282 1.806 - 1.813: 61.8742% ( 948) 00:17:16.282 1.813 - 1.821: 76.3608% ( 2390) 00:17:16.282 1.821 - 1.829: 88.0895% ( 1935) 00:17:16.282 1.829 - 1.836: 92.6658% ( 755) 00:17:16.282 1.836 - 1.844: 94.7691% ( 347) 00:17:16.282 1.844 - 1.851: 96.5935% ( 301) 00:17:16.282 1.851 - 1.859: 97.7331% ( 188) 00:17:16.282 1.859 - 1.867: 98.2119% ( 79) 00:17:16.282 1.867 - 1.874: 98.4362% ( 37) 00:17:16.282 1.874 - 1.882: 98.5695% ( 22) 00:17:16.282 1.882 - 1.890: 98.7150% ( 24) 
00:17:16.282 1.890 - 1.897: 98.8968% ( 30) 00:17:16.282 1.897 - 1.905: 99.0423% ( 24) 00:17:16.282 1.905 - 1.912: 99.1696% ( 21) 00:17:16.282 1.912 - 1.920: 99.1938% ( 4) 00:17:16.282 1.920 - 1.928: 99.2181% ( 4) 00:17:16.282 1.935 - 1.943: 99.2241% ( 1) 00:17:16.282 1.943 - 1.950: 99.2302% ( 1) 00:17:16.282 1.950 - 1.966: 99.2363% ( 1) 00:17:16.282 1.966 - 1.981: 99.2605% ( 4) 00:17:16.282 1.981 - 1.996: 99.2787% ( 3) 00:17:16.282 1.996 - 2.011: 99.2908% ( 2) 00:17:16.282 2.011 - 2.027: 99.3029% ( 2) 00:17:16.282 2.072 - 2.088: 99.3090% ( 1) 00:17:16.282 2.301 - 2.316: 99.3151% ( 1) 00:17:16.282 3.672 - 3.688: 99.3211% ( 1) 00:17:16.282 3.840 - 3.855: 99.3333% ( 2) 00:17:16.282 4.023 - 4.053: 99.3393% ( 1) 00:17:16.282 4.175 - 4.206: 99.3454% ( 1) 00:17:16.282 4.236 - 4.267: 99.3514% ( 1) 00:17:16.282 4.450 - 4.480: 99.3575% ( 1) 00:17:16.282 4.602 - 4.632: 99.3636% ( 1) 00:17:16.282 4.693 - 4.724: 99.3696% ( 1) 00:17:16.282 4.815 - 4.846: 99.3757% ( 1) 00:17:16.282 5.120 - 5.150: 99.3817% ( 1) 00:17:16.282 5.150 - 5.181: 99.3878% ( 1) 00:17:16.282 5.425 - 5.455: 99.3939% ( 1) 00:17:16.282 5.455 - 5.486: 99.4060% ( 2) 00:17:16.282 5.516 - 5.547: 99.4120% ( 1) 00:17:16.282 5.821 - 5.851: 99.4181% ( 1) 00:17:16.282 5.851 - 5.882: 99.4242% ( 1) 00:17:16.282 6.004 - 6.034: 99.4302% ( 1) 00:17:16.282 6.065 - 6.095: 99.4363% ( 1) 00:17:16.282 6.187 - 6.217: 99.4424% ( 1) 00:17:16.282 6.613 - 6.644: 99.4484% ( 1) 00:17:16.282 6.705 - 6.735: 99.4545% ( 1) 00:17:16.282 6.796 - 6.827: 99.4605% ( 1) 00:17:16.282 6.857 - 6.888: 99.4666% ( 1) 00:17:16.282 7.710 - 7.741: 99.4727% ( 1) 00:17:16.282 8.168 - 8.229: 99.4787% ( 1) 00:17:16.282 8.838 - 8.899: 99.4848% ( 1) 00:17:16.282 9.570 - 9.630: 99.4908% ( 1) 00:17:16.282 50.225 - 50.469: 99.4969% ( 1) 00:17:16.282 2730.667 - 2746.270: 99.5030% ( 1) 00:17:16.282 3994.575 - 4025.783: 99.9939% ( 81) 00:17:16.282 4993.219 - 5024.427: 100.0000% ( 1) 00:17:16.282 00:17:16.282 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:17:16.282 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:17:16.282 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:17:16.282 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:17:16.282 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:16.282 [ 00:17:16.282 { 00:17:16.282 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:16.282 "subtype": "Discovery", 00:17:16.282 "listen_addresses": [], 00:17:16.282 "allow_any_host": true, 00:17:16.282 "hosts": [] 00:17:16.282 }, 00:17:16.282 { 00:17:16.282 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:16.282 "subtype": "NVMe", 00:17:16.282 "listen_addresses": [ 00:17:16.282 { 00:17:16.282 "trtype": "VFIOUSER", 00:17:16.282 "adrfam": "IPv4", 00:17:16.282 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:16.282 "trsvcid": "0" 00:17:16.282 } 00:17:16.282 ], 00:17:16.282 "allow_any_host": true, 00:17:16.282 "hosts": [], 00:17:16.282 "serial_number": "SPDK1", 00:17:16.282 "model_number": "SPDK bdev Controller", 00:17:16.282 "max_namespaces": 32, 00:17:16.282 "min_cntlid": 1, 00:17:16.282 "max_cntlid": 65519, 00:17:16.282 "namespaces": [ 00:17:16.282 { 00:17:16.282 "nsid": 1, 00:17:16.282 "bdev_name": "Malloc1", 00:17:16.282 "name": "Malloc1", 00:17:16.282 "nguid": "5E14E16D9FBB450A8559C3B5210D2626", 00:17:16.282 "uuid": "5e14e16d-9fbb-450a-8559-c3b5210d2626" 00:17:16.282 }, 00:17:16.282 { 00:17:16.282 "nsid": 2, 00:17:16.282 "bdev_name": "Malloc3", 00:17:16.282 "name": "Malloc3", 00:17:16.282 "nguid": "102D6114430742AA88F6A0780C19EC78", 00:17:16.282 
"uuid": "102d6114-4307-42aa-88f6-a0780c19ec78" 00:17:16.282 } 00:17:16.282 ] 00:17:16.282 }, 00:17:16.282 { 00:17:16.282 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:16.282 "subtype": "NVMe", 00:17:16.282 "listen_addresses": [ 00:17:16.282 { 00:17:16.282 "trtype": "VFIOUSER", 00:17:16.282 "adrfam": "IPv4", 00:17:16.282 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:16.282 "trsvcid": "0" 00:17:16.282 } 00:17:16.282 ], 00:17:16.282 "allow_any_host": true, 00:17:16.282 "hosts": [], 00:17:16.282 "serial_number": "SPDK2", 00:17:16.282 "model_number": "SPDK bdev Controller", 00:17:16.282 "max_namespaces": 32, 00:17:16.282 "min_cntlid": 1, 00:17:16.282 "max_cntlid": 65519, 00:17:16.282 "namespaces": [ 00:17:16.282 { 00:17:16.282 "nsid": 1, 00:17:16.282 "bdev_name": "Malloc2", 00:17:16.282 "name": "Malloc2", 00:17:16.282 "nguid": "CF9A85CB24A742008B8FF7F9754BFE15", 00:17:16.282 "uuid": "cf9a85cb-24a7-4200-8b8f-f7f9754bfe15" 00:17:16.282 } 00:17:16.282 ] 00:17:16.283 } 00:17:16.283 ] 00:17:16.283 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:17:16.283 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=1668448 00:17:16.283 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:17:16.283 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:17:16.283 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:17:16.283 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:17:16.283 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:17:16.283 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:17:16.283 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:17:16.283 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:17:16.541 [2024-11-20 08:14:30.404619] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:17:16.541 Malloc4 00:17:16.541 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:17:16.798 [2024-11-20 08:14:30.649436] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:17:16.798 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:17:16.798 Asynchronous Event Request test 00:17:16.798 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:17:16.798 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:17:16.798 Registering asynchronous event callbacks... 00:17:16.798 Starting namespace attribute notice tests for all controllers... 00:17:16.798 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:17:16.798 aer_cb - Changed Namespace 00:17:16.798 Cleaning up... 
00:17:17.057 [ 00:17:17.057 { 00:17:17.057 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:17:17.057 "subtype": "Discovery", 00:17:17.057 "listen_addresses": [], 00:17:17.057 "allow_any_host": true, 00:17:17.057 "hosts": [] 00:17:17.057 }, 00:17:17.057 { 00:17:17.057 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:17:17.057 "subtype": "NVMe", 00:17:17.057 "listen_addresses": [ 00:17:17.057 { 00:17:17.057 "trtype": "VFIOUSER", 00:17:17.057 "adrfam": "IPv4", 00:17:17.057 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:17:17.057 "trsvcid": "0" 00:17:17.057 } 00:17:17.057 ], 00:17:17.057 "allow_any_host": true, 00:17:17.057 "hosts": [], 00:17:17.057 "serial_number": "SPDK1", 00:17:17.057 "model_number": "SPDK bdev Controller", 00:17:17.057 "max_namespaces": 32, 00:17:17.057 "min_cntlid": 1, 00:17:17.057 "max_cntlid": 65519, 00:17:17.057 "namespaces": [ 00:17:17.057 { 00:17:17.057 "nsid": 1, 00:17:17.057 "bdev_name": "Malloc1", 00:17:17.057 "name": "Malloc1", 00:17:17.057 "nguid": "5E14E16D9FBB450A8559C3B5210D2626", 00:17:17.057 "uuid": "5e14e16d-9fbb-450a-8559-c3b5210d2626" 00:17:17.057 }, 00:17:17.057 { 00:17:17.057 "nsid": 2, 00:17:17.057 "bdev_name": "Malloc3", 00:17:17.057 "name": "Malloc3", 00:17:17.057 "nguid": "102D6114430742AA88F6A0780C19EC78", 00:17:17.057 "uuid": "102d6114-4307-42aa-88f6-a0780c19ec78" 00:17:17.057 } 00:17:17.057 ] 00:17:17.057 }, 00:17:17.057 { 00:17:17.057 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:17:17.057 "subtype": "NVMe", 00:17:17.057 "listen_addresses": [ 00:17:17.057 { 00:17:17.057 "trtype": "VFIOUSER", 00:17:17.057 "adrfam": "IPv4", 00:17:17.057 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:17:17.057 "trsvcid": "0" 00:17:17.057 } 00:17:17.057 ], 00:17:17.057 "allow_any_host": true, 00:17:17.057 "hosts": [], 00:17:17.057 "serial_number": "SPDK2", 00:17:17.057 "model_number": "SPDK bdev Controller", 00:17:17.057 "max_namespaces": 32, 00:17:17.057 "min_cntlid": 1, 00:17:17.057 "max_cntlid": 65519, 00:17:17.057 "namespaces": [ 
00:17:17.057 { 00:17:17.057 "nsid": 1, 00:17:17.057 "bdev_name": "Malloc2", 00:17:17.057 "name": "Malloc2", 00:17:17.057 "nguid": "CF9A85CB24A742008B8FF7F9754BFE15", 00:17:17.057 "uuid": "cf9a85cb-24a7-4200-8b8f-f7f9754bfe15" 00:17:17.057 }, 00:17:17.057 { 00:17:17.057 "nsid": 2, 00:17:17.057 "bdev_name": "Malloc4", 00:17:17.057 "name": "Malloc4", 00:17:17.057 "nguid": "D105095CEE20482AAB551A3E25641C4B", 00:17:17.057 "uuid": "d105095c-ee20-482a-ab55-1a3e25641c4b" 00:17:17.057 } 00:17:17.057 ] 00:17:17.057 } 00:17:17.057 ] 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 1668448 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1660792 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1660792 ']' 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1660792 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1660792 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1660792' 00:17:17.057 killing process with pid 1660792 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 1660792 00:17:17.057 08:14:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1660792 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=1668651 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 1668651' 00:17:17.316 Process pid: 1668651 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 1668651 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 1668651 ']' 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.316 
08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.316 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.316 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:17.316 [2024-11-20 08:14:31.211987] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:17:17.316 [2024-11-20 08:14:31.212828] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:17:17.316 [2024-11-20 08:14:31.212868] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.316 [2024-11-20 08:14:31.274681] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:17.316 [2024-11-20 08:14:31.317484] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.316 [2024-11-20 08:14:31.317523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.316 [2024-11-20 08:14:31.317530] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.316 [2024-11-20 08:14:31.317535] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.316 [2024-11-20 08:14:31.317540] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:17.316 [2024-11-20 08:14:31.319075] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.316 [2024-11-20 08:14:31.319116] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.316 [2024-11-20 08:14:31.319249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.316 [2024-11-20 08:14:31.319249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:17.576 [2024-11-20 08:14:31.388233] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:17:17.576 [2024-11-20 08:14:31.388762] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:17:17.576 [2024-11-20 08:14:31.389214] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:17:17.576 [2024-11-20 08:14:31.389377] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:17:17.576 [2024-11-20 08:14:31.389509] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:17:17.576 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.576 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:17:17.576 08:14:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:17:18.515 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:17:18.774 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:17:18.774 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:17:18.774 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:18.774 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:17:18.774 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:17:19.033 Malloc1 00:17:19.033 08:14:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:17:19.291 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:17:19.291 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:17:19.549 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:17:19.549 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:17:19.549 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:19.806 Malloc2 00:17:19.806 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:17:20.063 08:14:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:17:20.063 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:17:20.321 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:17:20.321 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 1668651 00:17:20.321 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 1668651 ']' 00:17:20.321 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 1668651 00:17:20.321 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:17:20.321 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:20.321 08:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1668651 00:17:20.321 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:20.321 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:20.321 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1668651' 00:17:20.321 killing process with pid 1668651 00:17:20.321 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 1668651 00:17:20.321 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 1668651 00:17:20.581 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:17:20.581 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:17:20.581 00:17:20.581 real 0m50.791s 00:17:20.581 user 3m16.534s 00:17:20.581 sys 0m3.197s 00:17:20.581 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:20.581 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:17:20.581 ************************************ 00:17:20.581 END TEST nvmf_vfio_user 00:17:20.581 ************************************ 00:17:20.581 08:14:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:20.581 08:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:20.581 08:14:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:20.581 08:14:34 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:17:20.581 ************************************ 00:17:20.581 START TEST nvmf_vfio_user_nvme_compliance 00:17:20.581 ************************************ 00:17:20.581 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:17:20.841 * Looking for test storage... 00:17:20.841 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:17:20.841 08:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:20.841 08:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:20.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.841 --rc genhtml_branch_coverage=1 00:17:20.841 --rc genhtml_function_coverage=1 00:17:20.841 --rc genhtml_legend=1 00:17:20.841 --rc geninfo_all_blocks=1 00:17:20.841 --rc geninfo_unexecuted_blocks=1 00:17:20.841 00:17:20.841 ' 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:20.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.841 --rc genhtml_branch_coverage=1 00:17:20.841 --rc genhtml_function_coverage=1 00:17:20.841 --rc genhtml_legend=1 00:17:20.841 --rc geninfo_all_blocks=1 00:17:20.841 --rc geninfo_unexecuted_blocks=1 00:17:20.841 00:17:20.841 ' 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:20.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.841 --rc genhtml_branch_coverage=1 00:17:20.841 --rc genhtml_function_coverage=1 00:17:20.841 --rc 
genhtml_legend=1 00:17:20.841 --rc geninfo_all_blocks=1 00:17:20.841 --rc geninfo_unexecuted_blocks=1 00:17:20.841 00:17:20.841 ' 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:20.841 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:20.841 --rc genhtml_branch_coverage=1 00:17:20.841 --rc genhtml_function_coverage=1 00:17:20.841 --rc genhtml_legend=1 00:17:20.841 --rc geninfo_all_blocks=1 00:17:20.841 --rc geninfo_unexecuted_blocks=1 00:17:20.841 00:17:20.841 ' 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:20.841 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.842 08:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@50 -- # : 0 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:20.842 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=1669406 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 1669406' 00:17:20.842 Process pid: 1669406 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:20.842 08:14:34 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 1669406 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 1669406 ']' 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:20.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:20.842 08:14:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:20.842 [2024-11-20 08:14:34.843326] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:17:20.842 [2024-11-20 08:14:34.843375] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:21.100 [2024-11-20 08:14:34.916060] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:21.100 [2024-11-20 08:14:34.957341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:21.100 [2024-11-20 08:14:34.957377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:21.100 [2024-11-20 08:14:34.957384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:21.100 [2024-11-20 08:14:34.957390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:21.100 [2024-11-20 08:14:34.957395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:21.100 [2024-11-20 08:14:34.958764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:21.100 [2024-11-20 08:14:34.958877] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:21.100 [2024-11-20 08:14:34.958878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:21.100 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:21.100 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:17:21.100 08:14:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:17:22.031 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:22.031 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:17:22.031 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:22.289 08:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:22.289 malloc0 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.289 08:14:36 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.289 08:14:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:17:22.289 00:17:22.289 00:17:22.289 CUnit - A unit testing framework for C - Version 2.1-3 00:17:22.289 http://cunit.sourceforge.net/ 00:17:22.289 00:17:22.289 00:17:22.289 Suite: nvme_compliance 00:17:22.289 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 08:14:36.286598] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:22.289 [2024-11-20 08:14:36.287926] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:17:22.289 [2024-11-20 08:14:36.287941] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:17:22.289 [2024-11-20 08:14:36.287947] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:17:22.289 [2024-11-20 08:14:36.289617] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:22.547 passed 00:17:22.547 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 08:14:36.370195] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:22.547 [2024-11-20 08:14:36.373219] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:22.547 passed 00:17:22.547 Test: admin_identify_ns ...[2024-11-20 08:14:36.452951] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:22.547 [2024-11-20 08:14:36.513215] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:17:22.547 [2024-11-20 08:14:36.521210] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:17:22.547 [2024-11-20 08:14:36.542297] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:22.547 passed 00:17:22.804 Test: admin_get_features_mandatory_features ...[2024-11-20 08:14:36.619188] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:22.804 [2024-11-20 08:14:36.622211] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:22.804 passed 00:17:22.804 Test: admin_get_features_optional_features ...[2024-11-20 08:14:36.700706] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:22.804 [2024-11-20 08:14:36.706745] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:22.804 passed 00:17:22.804 Test: admin_set_features_number_of_queues ...[2024-11-20 08:14:36.785546] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.061 [2024-11-20 08:14:36.891297] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.061 passed 00:17:23.061 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 08:14:36.963987] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.061 [2024-11-20 08:14:36.969023] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.061 passed 00:17:23.061 Test: admin_get_log_page_with_lpo ...[2024-11-20 08:14:37.046721] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.319 [2024-11-20 08:14:37.118212] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:17:23.319 [2024-11-20 08:14:37.131282] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.319 passed 00:17:23.319 Test: fabric_property_get ...[2024-11-20 08:14:37.208042] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.319 [2024-11-20 08:14:37.209285] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:17:23.319 [2024-11-20 08:14:37.211063] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.319 passed 00:17:23.319 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 08:14:37.291578] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.319 [2024-11-20 08:14:37.292814] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:17:23.319 [2024-11-20 08:14:37.294600] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.319 passed 00:17:23.576 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 08:14:37.373497] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.576 [2024-11-20 08:14:37.460210] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:23.576 [2024-11-20 08:14:37.476207] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:23.576 [2024-11-20 08:14:37.481295] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.576 passed 00:17:23.576 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 08:14:37.557982] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.576 [2024-11-20 
08:14:37.559217] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:17:23.576 [2024-11-20 08:14:37.561004] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.576 passed 00:17:23.835 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 08:14:37.638688] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.835 [2024-11-20 08:14:37.715211] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:23.835 [2024-11-20 08:14:37.739206] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:17:23.835 [2024-11-20 08:14:37.744295] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.835 passed 00:17:23.835 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 08:14:37.817959] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:23.835 [2024-11-20 08:14:37.819197] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:17:23.835 [2024-11-20 08:14:37.819228] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:17:23.835 [2024-11-20 08:14:37.820988] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:23.835 passed 00:17:24.092 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 08:14:37.898476] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:24.092 [2024-11-20 08:14:37.990208] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:17:24.092 [2024-11-20 08:14:37.998207] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:17:24.092 [2024-11-20 08:14:38.006216] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:17:24.092 [2024-11-20 
08:14:38.014214] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:17:24.092 [2024-11-20 08:14:38.046329] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:24.092 passed 00:17:24.350 Test: admin_create_io_sq_verify_pc ...[2024-11-20 08:14:38.120050] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:24.350 [2024-11-20 08:14:38.135214] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:17:24.350 [2024-11-20 08:14:38.153221] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:24.350 passed 00:17:24.350 Test: admin_create_io_qp_max_qps ...[2024-11-20 08:14:38.230749] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:25.717 [2024-11-20 08:14:39.342213] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:17:25.717 [2024-11-20 08:14:39.716928] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:25.973 passed 00:17:25.973 Test: admin_create_io_sq_shared_cq ...[2024-11-20 08:14:39.792904] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:17:25.973 [2024-11-20 08:14:39.925207] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:17:25.973 [2024-11-20 08:14:39.962268] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:17:25.973 passed 00:17:25.973 00:17:25.973 Run Summary: Type Total Ran Passed Failed Inactive 00:17:25.973 suites 1 1 n/a 0 0 00:17:25.973 tests 18 18 18 0 0 00:17:25.973 asserts 360 360 360 0 n/a 00:17:25.973 00:17:25.973 Elapsed time = 1.509 seconds 00:17:26.231 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 1669406 00:17:26.231 08:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 1669406 ']' 00:17:26.231 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 1669406 00:17:26.231 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:17:26.231 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:26.231 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1669406 00:17:26.231 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:26.231 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:26.231 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1669406' 00:17:26.231 killing process with pid 1669406 00:17:26.231 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 1669406 00:17:26.231 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 1669406 00:17:26.231 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:17:26.231 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:17:26.231 00:17:26.232 real 0m5.659s 00:17:26.232 user 0m15.868s 00:17:26.232 sys 0m0.498s 00:17:26.232 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:26.232 08:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:17:26.232 ************************************ 00:17:26.232 END TEST nvmf_vfio_user_nvme_compliance 00:17:26.232 ************************************ 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:26.490 ************************************ 00:17:26.490 START TEST nvmf_vfio_user_fuzz 00:17:26.490 ************************************ 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:17:26.490 * Looking for test storage... 
00:17:26.490 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:17:26.490 08:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:17:26.490 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:26.490 08:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:26.490 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.490 --rc genhtml_branch_coverage=1 00:17:26.490 --rc genhtml_function_coverage=1 00:17:26.490 --rc genhtml_legend=1 00:17:26.490 --rc geninfo_all_blocks=1 00:17:26.490 --rc geninfo_unexecuted_blocks=1 00:17:26.490 00:17:26.490 ' 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:26.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.491 --rc genhtml_branch_coverage=1 00:17:26.491 --rc genhtml_function_coverage=1 00:17:26.491 --rc genhtml_legend=1 00:17:26.491 --rc geninfo_all_blocks=1 00:17:26.491 --rc geninfo_unexecuted_blocks=1 00:17:26.491 00:17:26.491 ' 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:26.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.491 --rc genhtml_branch_coverage=1 00:17:26.491 --rc genhtml_function_coverage=1 00:17:26.491 --rc genhtml_legend=1 00:17:26.491 --rc geninfo_all_blocks=1 00:17:26.491 --rc geninfo_unexecuted_blocks=1 00:17:26.491 00:17:26.491 ' 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:26.491 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:26.491 --rc genhtml_branch_coverage=1 00:17:26.491 --rc genhtml_function_coverage=1 00:17:26.491 --rc genhtml_legend=1 00:17:26.491 --rc geninfo_all_blocks=1 00:17:26.491 --rc geninfo_unexecuted_blocks=1 00:17:26.491 00:17:26.491 ' 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 
00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:26.491 08:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.491 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:26.750 08:14:40 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@50 -- # : 0 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:26.750 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=1670394 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 1670394' 00:17:26.750 Process pid: 1670394 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 1670394 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 1670394 ']' 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:26.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:26.750 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:27.008 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:27.008 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:17:27.008 08:14:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:17:27.941 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:17:27.941 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.941 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:27.941 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.941 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:17:27.941 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:27.942 malloc0 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:17:27.942 08:14:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:18:00.006 Fuzzing completed. 
Shutting down the fuzz application 00:18:00.006 00:18:00.006 Dumping successful admin opcodes: 00:18:00.006 8, 9, 10, 24, 00:18:00.006 Dumping successful io opcodes: 00:18:00.006 0, 00:18:00.006 NS: 0x20000081ef00 I/O qp, Total commands completed: 1005394, total successful commands: 3939, random_seed: 1903846080 00:18:00.006 NS: 0x20000081ef00 admin qp, Total commands completed: 243398, total successful commands: 1959, random_seed: 2601853312 00:18:00.006 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:18:00.006 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:00.006 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.006 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:00.006 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 1670394 00:18:00.006 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 1670394 ']' 00:18:00.006 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 1670394 00:18:00.006 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:18:00.006 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:00.006 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1670394 00:18:00.006 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:00.006 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:00.006 
08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1670394' 00:18:00.006 killing process with pid 1670394 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 1670394 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 1670394 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:18:00.007 00:18:00.007 real 0m32.219s 00:18:00.007 user 0m29.990s 00:18:00.007 sys 0m31.130s 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:18:00.007 ************************************ 00:18:00.007 END TEST nvmf_vfio_user_fuzz 00:18:00.007 ************************************ 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:00.007 ************************************ 00:18:00.007 START TEST nvmf_auth_target 00:18:00.007 ************************************ 00:18:00.007 08:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:18:00.007 * Looking for test storage... 00:18:00.007 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:00.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.007 --rc genhtml_branch_coverage=1 00:18:00.007 --rc genhtml_function_coverage=1 00:18:00.007 --rc genhtml_legend=1 00:18:00.007 --rc geninfo_all_blocks=1 00:18:00.007 --rc geninfo_unexecuted_blocks=1 00:18:00.007 00:18:00.007 ' 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:00.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.007 --rc genhtml_branch_coverage=1 00:18:00.007 --rc genhtml_function_coverage=1 00:18:00.007 --rc genhtml_legend=1 00:18:00.007 --rc geninfo_all_blocks=1 00:18:00.007 --rc geninfo_unexecuted_blocks=1 00:18:00.007 00:18:00.007 ' 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:00.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.007 --rc genhtml_branch_coverage=1 00:18:00.007 --rc genhtml_function_coverage=1 00:18:00.007 --rc genhtml_legend=1 00:18:00.007 --rc geninfo_all_blocks=1 00:18:00.007 --rc geninfo_unexecuted_blocks=1 00:18:00.007 00:18:00.007 ' 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:00.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:00.007 --rc genhtml_branch_coverage=1 00:18:00.007 --rc genhtml_function_coverage=1 00:18:00.007 --rc genhtml_legend=1 00:18:00.007 --rc geninfo_all_blocks=1 00:18:00.007 --rc geninfo_unexecuted_blocks=1 00:18:00.007 00:18:00.007 ' 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:00.007 08:15:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:00.007 
08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.007 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@50 -- # : 0 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:00.008 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 
00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # xtrace_disable 00:18:00.008 08:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # pci_devs=() 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:05.286 
08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # net_devs=() 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # e810=() 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # local -ga e810 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # x722=() 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # local -ga x722 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # mlx=() 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # local -ga mlx 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:05.286 08:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:05.286 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:05.286 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.286 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:05.286 Found net devices under 0000:86:00.0: cvl_0_0 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:05.287 Found net devices under 0000:86:00.1: cvl_0_1 00:18:05.287 08:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # is_hw=yes 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@247 -- # create_target_ns 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 
00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:18:05.287 08:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:18:05.287 10.0.0.1 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:18:05.287 10.0.0.2 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # ipts -I 
INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:05.287 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 
]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:05.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:05.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.459 ms 00:18:05.288 00:18:05.288 --- 10.0.0.1 ping statistics --- 00:18:05.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.288 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:05.288 
08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:18:05.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:05.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:18:05.288 00:18:05.288 --- 10.0.0.2 ping statistics --- 00:18:05.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:05.288 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # return 0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@322 -- # 
NVMF_TARGET_INTERFACE2= 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:18:05.288 08:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # return 1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev= 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@160 -- # return 0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.288 08:15:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target0 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@156 -- # 
local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # local dev=target1 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:18:05.288 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # return 1 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@159 -- # dev= 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@160 -- # return 0 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:18:05.289 ' 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:05.289 08:15:18 
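The xtrace above shows `nvmf/setup.sh` resolving each logical endpoint (`target0`, `initiator0`, ...) to a real interface (`cvl_0_1`, `cvl_0_0`) and reading the IP it previously stored in the interface's `ifalias`, running the read through `ip netns exec` when the device lives in the target namespace. A minimal re-sketch of that `get_ip_address` flow, with a hypothetical `net_dev_map` array standing in for the script's device-mapping variables and a fake sysfs tree so it runs without real NICs:

```shell
# Sketch of setup.sh's get_ip_address: map a logical dev name to the
# real interface, then read the IP cached in its ifalias, optionally
# inside a network namespace. net_dev_map and the sysfs root argument
# are stand-ins for this sketch, not part of the real script.
get_ip_address() {
    local dev=$1 in_ns=$2 sysfs=${3:-/sys} ip
    dev=${net_dev_map[$dev]:-}            # e.g. target0 -> cvl_0_1
    [[ -n $dev ]] || return 1             # unknown device: mimic "return 1"
    if [[ -n $in_ns ]]; then
        ip=$(ip netns exec "$in_ns" cat "$sysfs/class/net/$dev/ifalias")
    else
        ip=$(cat "$sysfs/class/net/$dev/ifalias")
    fi
    [[ -n $ip ]] && echo "$ip"
}

# Fake sysfs tree so the sketch is self-contained:
declare -A net_dev_map=([target0]=cvl_0_1 [initiator0]=cvl_0_0)
mkdir -p /tmp/fakesys/class/net/cvl_0_1
echo 10.0.0.2 > /tmp/fakesys/class/net/cvl_0_1/ifalias
get_ip_address target0 "" /tmp/fakesys    # prints 10.0.0.2
```

The same lookup run for `target1` or `initiator1` fails at the mapping step, which is exactly the `return 1` path that leaves `NVMF_SECOND_TARGET_IP` and `NVMF_SECOND_INITIATOR_IP` empty in the log.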
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=1679233 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 1679233 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1679233 ']' 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.289 08:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=1679254 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@528 -- # digest=null 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=b4f6c0fd7fde42da09b8c96b345c0596dc9cbcdb3fefc63a 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.Xil 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key b4f6c0fd7fde42da09b8c96b345c0596dc9cbcdb3fefc63a 0 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 b4f6c0fd7fde42da09b8c96b345c0596dc9cbcdb3fefc63a 0 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=b4f6c0fd7fde42da09b8c96b345c0596dc9cbcdb3fefc63a 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.Xil 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.Xil 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.Xil 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=b582bd7217e8479a265dc7abc8247374f4454a7c04c69dd2878ccf66b34f622d 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.N7S 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key b582bd7217e8479a265dc7abc8247374f4454a7c04c69dd2878ccf66b34f622d 3 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 b582bd7217e8479a265dc7abc8247374f4454a7c04c69dd2878ccf66b34f622d 3 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=b582bd7217e8479a265dc7abc8247374f4454a7c04c69dd2878ccf66b34f622d 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@506 -- # digest=3 00:18:05.289 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.N7S 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.N7S 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.N7S 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=1f1d82df6dd20938f17b7d8d704a49d4 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.rYa 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 1f1d82df6dd20938f17b7d8d704a49d4 1 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 
1f1d82df6dd20938f17b7d8d704a49d4 1 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=1f1d82df6dd20938f17b7d8d704a49d4 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.rYa 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.rYa 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.rYa 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=f2f5eed7ead24c5775058bbea8bfd908bc18a98a6d2327b9 00:18:05.549 08:15:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.cCy 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key f2f5eed7ead24c5775058bbea8bfd908bc18a98a6d2327b9 2 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 f2f5eed7ead24c5775058bbea8bfd908bc18a98a6d2327b9 2 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=f2f5eed7ead24c5775058bbea8bfd908bc18a98a6d2327b9 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.cCy 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.cCy 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.cCy 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A 
digests 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=66804a430ff87743055952bf097e9dbf6f9fa29ad3f55d92 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.oLO 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 66804a430ff87743055952bf097e9dbf6f9fa29ad3f55d92 2 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 66804a430ff87743055952bf097e9dbf6f9fa29ad3f55d92 2 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=66804a430ff87743055952bf097e9dbf6f9fa29ad3f55d92 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.oLO 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.oLO 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.oLO 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:18:05.549 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=9a7dddec69a2d7bd6a4f07ca65ad3ef8 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.NJd 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 9a7dddec69a2d7bd6a4f07ca65ad3ef8 1 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 9a7dddec69a2d7bd6a4f07ca65ad3ef8 1 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=9a7dddec69a2d7bd6a4f07ca65ad3ef8 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 
00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.NJd 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.NJd 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.NJd 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:18:05.550 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:18:05.808 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:18:05.808 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=54cf8a50dbe94b2b8397fbe477f9c67c7ae449cbdd1093450cdfe0ead4156396 00:18:05.808 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:18:05.808 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.kYm 00:18:05.808 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 54cf8a50dbe94b2b8397fbe477f9c67c7ae449cbdd1093450cdfe0ead4156396 3 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # 
format_key DHHC-1 54cf8a50dbe94b2b8397fbe477f9c67c7ae449cbdd1093450cdfe0ead4156396 3 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=54cf8a50dbe94b2b8397fbe477f9c67c7ae449cbdd1093450cdfe0ead4156396 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.kYm 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.kYm 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.kYm 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 1679233 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1679233 ']' 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:05.809 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
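Each `gen_dhchap_key <digest> <len>` call above follows the same pattern: pull `len/2` random bytes from `/dev/urandom` as `len` hex characters with `xxd -p -c0`, write them to a `mktemp` file, and lock the file to mode 0600. A condensed sketch of that helper (the real `common.sh` additionally wraps the hex key in SPDK's `DHHC-1:` envelope via the inline `python -` step, which is elided here):

```shell
# Minimal sketch of common.sh's gen_dhchap_key: `len` hex chars of
# entropy in a mode-0600 temp file. The DHHC-1 formatting step from
# the real script is intentionally omitted.
gen_dhchap_key() {
    local digest=$1 len=$2 file key
    key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)   # len hex chars, one line
    file=$(mktemp -t "spdk.key-$digest.XXX")
    echo "$key" > "$file"
    chmod 0600 "$file"
    echo "$file"
}

keyfile=$(gen_dhchap_key null 48)
wc -c < "$keyfile"    # 49 bytes: 48 hex chars plus the trailing newline
```

This matches the lengths seen in the log: `gen_dhchap_key null 48` and `sha384 48` read 24 bytes, `sha256 32` reads 16, and `sha512 64` reads 32.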
00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:05.809 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.067 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.067 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:06.067 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 1679254 /var/tmp/host.sock 00:18:06.067 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1679254 ']' 00:18:06.067 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:06.067 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:06.067 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:06.067 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:18:06.067 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:06.067 08:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Xil 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.Xil 00:18:06.067 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.Xil 00:18:06.326 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.N7S ]] 00:18:06.326 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N7S 00:18:06.326 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.326 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.326 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.326 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N7S 00:18:06.326 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N7S 00:18:06.585 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:06.585 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rYa 00:18:06.585 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.585 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.585 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.585 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.rYa 00:18:06.585 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.rYa 00:18:06.844 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.cCy ]] 00:18:06.844 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cCy 00:18:06.844 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:06.844 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.844 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:06.844 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cCy 00:18:06.844 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cCy 00:18:06.844 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:07.103 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oLO 00:18:07.103 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.103 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.103 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.103 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.oLO 00:18:07.103 08:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.oLO 00:18:07.103 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.NJd ]] 00:18:07.103 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NJd 00:18:07.103 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.103 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.103 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.103 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NJd 00:18:07.103 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NJd 00:18:07.362 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:18:07.362 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kYm 00:18:07.362 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.362 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.362 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.362 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.kYm 00:18:07.362 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.kYm 00:18:07.621 08:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.621 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:07.622 08:15:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.881 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:07.881 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.881 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.881 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:07.881 00:18:07.881 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:07.881 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:07.881 08:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:08.139 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:08.139 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:08.139 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:08.139 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:08.139 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:08.139 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:08.139 { 00:18:08.139 "cntlid": 1, 00:18:08.139 "qid": 0, 00:18:08.139 "state": "enabled", 00:18:08.139 "thread": "nvmf_tgt_poll_group_000", 00:18:08.139 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:08.139 "listen_address": { 00:18:08.139 "trtype": "TCP", 00:18:08.139 "adrfam": "IPv4", 00:18:08.139 "traddr": "10.0.0.2", 00:18:08.139 "trsvcid": "4420" 00:18:08.139 }, 00:18:08.139 "peer_address": { 00:18:08.139 "trtype": "TCP", 00:18:08.139 "adrfam": "IPv4", 00:18:08.139 "traddr": "10.0.0.1", 00:18:08.139 "trsvcid": "47782" 00:18:08.139 }, 00:18:08.139 "auth": { 00:18:08.139 "state": "completed", 00:18:08.139 "digest": "sha256", 00:18:08.139 "dhgroup": "null" 00:18:08.139 } 00:18:08.139 } 00:18:08.139 ]' 00:18:08.139 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:08.139 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:08.139 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:08.397 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:08.397 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:08.397 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.397 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.397 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.656 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:08.656 08:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:09.223 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:09.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:09.224 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:09.483 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.483 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.483 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.483 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.483 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.483 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.483 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:09.483 00:18:09.741 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:09.741 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:09.741 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:09.741 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.741 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.741 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:09.741 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.741 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:09.741 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:09.741 { 00:18:09.741 "cntlid": 3, 00:18:09.741 "qid": 0, 00:18:09.741 "state": "enabled", 00:18:09.741 "thread": "nvmf_tgt_poll_group_000", 00:18:09.741 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:09.741 "listen_address": { 00:18:09.741 "trtype": "TCP", 00:18:09.741 "adrfam": "IPv4", 00:18:09.741 
"traddr": "10.0.0.2", 00:18:09.741 "trsvcid": "4420" 00:18:09.741 }, 00:18:09.741 "peer_address": { 00:18:09.741 "trtype": "TCP", 00:18:09.741 "adrfam": "IPv4", 00:18:09.741 "traddr": "10.0.0.1", 00:18:09.741 "trsvcid": "47814" 00:18:09.741 }, 00:18:09.741 "auth": { 00:18:09.741 "state": "completed", 00:18:09.741 "digest": "sha256", 00:18:09.741 "dhgroup": "null" 00:18:09.741 } 00:18:09.741 } 00:18:09.741 ]' 00:18:09.741 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:09.742 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:09.742 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:10.000 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:10.000 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:10.000 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:10.000 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:10.000 08:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:10.259 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:10.259 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.826 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:10.827 08:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:11.085 00:18:11.344 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:11.344 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:11.344 
08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.344 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.344 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:11.344 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:11.344 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.344 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:11.344 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:11.344 { 00:18:11.344 "cntlid": 5, 00:18:11.344 "qid": 0, 00:18:11.344 "state": "enabled", 00:18:11.344 "thread": "nvmf_tgt_poll_group_000", 00:18:11.344 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:11.344 "listen_address": { 00:18:11.344 "trtype": "TCP", 00:18:11.344 "adrfam": "IPv4", 00:18:11.344 "traddr": "10.0.0.2", 00:18:11.344 "trsvcid": "4420" 00:18:11.344 }, 00:18:11.344 "peer_address": { 00:18:11.344 "trtype": "TCP", 00:18:11.344 "adrfam": "IPv4", 00:18:11.344 "traddr": "10.0.0.1", 00:18:11.344 "trsvcid": "47836" 00:18:11.344 }, 00:18:11.344 "auth": { 00:18:11.344 "state": "completed", 00:18:11.344 "digest": "sha256", 00:18:11.344 "dhgroup": "null" 00:18:11.344 } 00:18:11.344 } 00:18:11.344 ]' 00:18:11.344 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:11.603 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:11.603 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:18:11.603 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:11.603 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:11.603 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.603 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.603 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.861 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:11.861 08:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:12.433 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:12.433 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.434 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:12.434 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.434 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:12.434 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.434 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:12.434 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.434 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:12.692 00:18:12.692 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:12.692 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:12.692 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.951 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.951 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.951 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:12.951 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.951 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:12.951 
08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:12.951 { 00:18:12.951 "cntlid": 7, 00:18:12.951 "qid": 0, 00:18:12.951 "state": "enabled", 00:18:12.951 "thread": "nvmf_tgt_poll_group_000", 00:18:12.952 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:12.952 "listen_address": { 00:18:12.952 "trtype": "TCP", 00:18:12.952 "adrfam": "IPv4", 00:18:12.952 "traddr": "10.0.0.2", 00:18:12.952 "trsvcid": "4420" 00:18:12.952 }, 00:18:12.952 "peer_address": { 00:18:12.952 "trtype": "TCP", 00:18:12.952 "adrfam": "IPv4", 00:18:12.952 "traddr": "10.0.0.1", 00:18:12.952 "trsvcid": "47876" 00:18:12.952 }, 00:18:12.952 "auth": { 00:18:12.952 "state": "completed", 00:18:12.952 "digest": "sha256", 00:18:12.952 "dhgroup": "null" 00:18:12.952 } 00:18:12.952 } 00:18:12.952 ]' 00:18:12.952 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:12.952 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:12.952 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:13.211 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:13.211 08:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:13.211 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.211 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.211 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.211 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:13.211 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:13.778 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.779 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:13.779 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.779 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.037 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.037 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:14.037 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:14.037 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:14.037 08:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:18:14.037 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:18:14.037 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:14.037 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:14.037 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:14.037 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:14.037 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:14.037 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.037 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.037 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.037 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.037 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.037 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.038 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:14.296 00:18:14.296 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:14.296 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:14.296 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.555 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.555 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.555 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.555 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.555 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.555 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:14.555 { 00:18:14.555 "cntlid": 9, 00:18:14.555 "qid": 0, 00:18:14.555 "state": "enabled", 00:18:14.555 "thread": "nvmf_tgt_poll_group_000", 00:18:14.555 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:14.555 "listen_address": { 00:18:14.555 "trtype": "TCP", 00:18:14.555 "adrfam": "IPv4", 00:18:14.555 "traddr": "10.0.0.2", 00:18:14.555 "trsvcid": "4420" 00:18:14.555 }, 00:18:14.555 "peer_address": { 00:18:14.555 "trtype": "TCP", 00:18:14.555 "adrfam": "IPv4", 00:18:14.555 "traddr": "10.0.0.1", 00:18:14.555 "trsvcid": "47900" 00:18:14.555 
}, 00:18:14.555 "auth": { 00:18:14.555 "state": "completed", 00:18:14.555 "digest": "sha256", 00:18:14.555 "dhgroup": "ffdhe2048" 00:18:14.555 } 00:18:14.555 } 00:18:14.555 ]' 00:18:14.555 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:14.555 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:14.555 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:14.555 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:14.555 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:14.814 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.814 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.814 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.814 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:14.814 08:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret 
DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:15.381 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.381 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.381 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:15.381 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.381 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.381 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.381 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:15.381 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:15.381 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.640 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:15.899 00:18:15.899 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:15.899 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:15.899 08:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.159 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.159 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.159 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:16.159 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.159 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.159 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:16.159 { 00:18:16.159 "cntlid": 11, 00:18:16.159 "qid": 0, 00:18:16.159 "state": "enabled", 00:18:16.159 "thread": "nvmf_tgt_poll_group_000", 00:18:16.159 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:16.159 "listen_address": { 00:18:16.159 "trtype": "TCP", 00:18:16.159 "adrfam": "IPv4", 00:18:16.159 "traddr": "10.0.0.2", 00:18:16.159 "trsvcid": "4420" 00:18:16.159 }, 00:18:16.159 "peer_address": { 00:18:16.159 "trtype": "TCP", 00:18:16.159 "adrfam": "IPv4", 00:18:16.159 "traddr": "10.0.0.1", 00:18:16.159 "trsvcid": "50908" 00:18:16.159 }, 00:18:16.159 "auth": { 00:18:16.159 "state": "completed", 00:18:16.159 "digest": "sha256", 00:18:16.159 "dhgroup": "ffdhe2048" 00:18:16.159 } 00:18:16.159 } 00:18:16.159 ]' 00:18:16.159 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:16.159 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:16.159 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:16.159 08:15:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:16.159 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:16.418 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.418 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.418 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.418 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:16.418 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:16.984 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:16.984 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:16.984 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:16.984 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:18:16.984 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.984 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:16.984 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:16.984 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:16.984 08:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.241 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:17.504 00:18:17.504 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:17.504 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:17.504 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.765 08:15:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:17.765 { 00:18:17.765 "cntlid": 13, 00:18:17.765 "qid": 0, 00:18:17.765 "state": "enabled", 00:18:17.765 "thread": "nvmf_tgt_poll_group_000", 00:18:17.765 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:17.765 "listen_address": { 00:18:17.765 "trtype": "TCP", 00:18:17.765 "adrfam": "IPv4", 00:18:17.765 "traddr": "10.0.0.2", 00:18:17.765 "trsvcid": "4420" 00:18:17.765 }, 00:18:17.765 "peer_address": { 00:18:17.765 "trtype": "TCP", 00:18:17.765 "adrfam": "IPv4", 00:18:17.765 "traddr": "10.0.0.1", 00:18:17.765 "trsvcid": "50948" 00:18:17.765 }, 00:18:17.765 "auth": { 00:18:17.765 "state": "completed", 00:18:17.765 "digest": "sha256", 00:18:17.765 "dhgroup": "ffdhe2048" 00:18:17.765 } 00:18:17.765 } 00:18:17.765 ]' 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:17.765 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.023 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:18.023 08:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:18.621 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.621 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:18.621 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.621 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.621 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.621 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:18.621 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:18.621 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:18.893 08:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:19.169 00:18:19.169 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:19.169 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:19.169 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:19.451 { 00:18:19.451 "cntlid": 15, 00:18:19.451 "qid": 0, 00:18:19.451 "state": "enabled", 00:18:19.451 "thread": "nvmf_tgt_poll_group_000", 00:18:19.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:19.451 "listen_address": { 00:18:19.451 "trtype": "TCP", 00:18:19.451 "adrfam": "IPv4", 00:18:19.451 "traddr": "10.0.0.2", 00:18:19.451 "trsvcid": "4420" 00:18:19.451 }, 00:18:19.451 "peer_address": { 00:18:19.451 "trtype": "TCP", 00:18:19.451 "adrfam": "IPv4", 00:18:19.451 "traddr": "10.0.0.1", 
00:18:19.451 "trsvcid": "50976" 00:18:19.451 }, 00:18:19.451 "auth": { 00:18:19.451 "state": "completed", 00:18:19.451 "digest": "sha256", 00:18:19.451 "dhgroup": "ffdhe2048" 00:18:19.451 } 00:18:19.451 } 00:18:19.451 ]' 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:19.451 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:19.724 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:19.724 08:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:20.290 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:20.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:20.290 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:20.290 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.290 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.290 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.290 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:20.290 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:20.290 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:20.290 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:20.548 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:18:20.548 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:20.548 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:20.548 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:20.548 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:20.548 08:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:20.548 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.548 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.548 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.548 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.548 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.548 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.548 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:20.548 00:18:20.806 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:20.806 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:20.806 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:20.806 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:20.806 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:20.806 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:20.806 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.806 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:20.806 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:20.806 { 00:18:20.806 "cntlid": 17, 00:18:20.806 "qid": 0, 00:18:20.806 "state": "enabled", 00:18:20.806 "thread": "nvmf_tgt_poll_group_000", 00:18:20.806 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:20.806 "listen_address": { 00:18:20.806 "trtype": "TCP", 00:18:20.806 "adrfam": "IPv4", 00:18:20.806 "traddr": "10.0.0.2", 00:18:20.806 "trsvcid": "4420" 00:18:20.806 }, 00:18:20.806 "peer_address": { 00:18:20.806 "trtype": "TCP", 00:18:20.806 "adrfam": "IPv4", 00:18:20.806 "traddr": "10.0.0.1", 00:18:20.806 "trsvcid": "51004" 00:18:20.807 }, 00:18:20.807 "auth": { 00:18:20.807 "state": "completed", 00:18:20.807 "digest": "sha256", 00:18:20.807 "dhgroup": "ffdhe3072" 00:18:20.807 } 00:18:20.807 } 00:18:20.807 ]' 00:18:20.807 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:21.064 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:21.064 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:21.064 08:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:21.064 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:21.064 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:21.064 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:21.064 08:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:21.323 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:21.323 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:21.889 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:21.889 08:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:21.889 08:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:21.889 08:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:22.148 00:18:22.148 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:22.148 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:22.148 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.406 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.406 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.406 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:22.406 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:22.406 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:22.406 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:22.406 { 00:18:22.406 "cntlid": 19, 00:18:22.406 "qid": 0, 00:18:22.406 "state": "enabled", 00:18:22.406 "thread": "nvmf_tgt_poll_group_000", 00:18:22.406 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:22.406 "listen_address": { 00:18:22.406 "trtype": "TCP", 00:18:22.406 "adrfam": "IPv4", 00:18:22.406 "traddr": "10.0.0.2", 00:18:22.406 "trsvcid": "4420" 00:18:22.406 }, 00:18:22.406 "peer_address": { 00:18:22.406 "trtype": "TCP", 00:18:22.406 "adrfam": "IPv4", 00:18:22.406 "traddr": "10.0.0.1", 00:18:22.406 "trsvcid": "51038" 00:18:22.407 }, 00:18:22.407 "auth": { 00:18:22.407 "state": "completed", 00:18:22.407 "digest": "sha256", 00:18:22.407 "dhgroup": "ffdhe3072" 00:18:22.407 } 00:18:22.407 } 00:18:22.407 ]' 00:18:22.407 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:22.407 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:22.407 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:22.665 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:22.665 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:22.665 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.665 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.665 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.665 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:22.665 08:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:23.229 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.229 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.229 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:23.229 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.229 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:23.487 08:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.487 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:23.744 00:18:23.744 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:23.744 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:23.744 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.001 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.001 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:24.001 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:24.001 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.001 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:24.001 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:24.001 { 00:18:24.001 "cntlid": 21, 00:18:24.001 "qid": 0, 00:18:24.001 "state": "enabled", 00:18:24.001 "thread": "nvmf_tgt_poll_group_000", 00:18:24.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:24.001 "listen_address": { 00:18:24.001 "trtype": "TCP", 00:18:24.001 "adrfam": "IPv4", 00:18:24.001 "traddr": "10.0.0.2", 00:18:24.001 
"trsvcid": "4420" 00:18:24.001 }, 00:18:24.001 "peer_address": { 00:18:24.001 "trtype": "TCP", 00:18:24.001 "adrfam": "IPv4", 00:18:24.001 "traddr": "10.0.0.1", 00:18:24.001 "trsvcid": "51056" 00:18:24.001 }, 00:18:24.001 "auth": { 00:18:24.001 "state": "completed", 00:18:24.001 "digest": "sha256", 00:18:24.001 "dhgroup": "ffdhe3072" 00:18:24.001 } 00:18:24.001 } 00:18:24.001 ]' 00:18:24.001 08:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:24.001 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:24.001 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:24.259 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:24.259 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:24.259 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:24.259 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.259 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.516 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:24.516 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:25.081 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:25.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:25.081 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:25.081 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.081 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.081 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.081 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:25.081 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:25.081 08:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.081 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:25.337 00:18:25.337 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:25.337 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:25.337 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:25.595 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:25.595 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:25.595 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.595 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:25.595 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.595 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:25.595 { 00:18:25.595 "cntlid": 23, 00:18:25.595 "qid": 0, 00:18:25.595 "state": "enabled", 00:18:25.595 "thread": "nvmf_tgt_poll_group_000", 00:18:25.595 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:25.595 "listen_address": { 00:18:25.595 "trtype": "TCP", 00:18:25.595 "adrfam": "IPv4", 00:18:25.595 "traddr": "10.0.0.2", 00:18:25.595 "trsvcid": "4420" 00:18:25.595 }, 00:18:25.595 "peer_address": { 00:18:25.595 "trtype": "TCP", 00:18:25.595 "adrfam": "IPv4", 00:18:25.595 "traddr": "10.0.0.1", 00:18:25.595 "trsvcid": "51076" 00:18:25.595 }, 00:18:25.595 "auth": { 00:18:25.595 "state": "completed", 00:18:25.595 "digest": "sha256", 00:18:25.595 "dhgroup": "ffdhe3072" 00:18:25.595 } 00:18:25.595 } 00:18:25.595 ]' 00:18:25.595 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:25.595 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:25.595 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:25.595 08:15:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:25.595 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:25.853 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:25.853 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:25.853 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:25.853 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:25.853 08:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:26.419 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:26.419 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:26.419 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:26.419 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.419 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:18:26.419 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.419 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:26.419 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:26.419 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:26.419 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.677 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:26.935 00:18:26.935 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:26.935 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:26.935 08:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:27.194 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:27.194 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:27.194 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:27.194 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.194 08:15:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:27.194 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:27.194 { 00:18:27.194 "cntlid": 25, 00:18:27.194 "qid": 0, 00:18:27.194 "state": "enabled", 00:18:27.194 "thread": "nvmf_tgt_poll_group_000", 00:18:27.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:27.194 "listen_address": { 00:18:27.194 "trtype": "TCP", 00:18:27.194 "adrfam": "IPv4", 00:18:27.194 "traddr": "10.0.0.2", 00:18:27.194 "trsvcid": "4420" 00:18:27.194 }, 00:18:27.194 "peer_address": { 00:18:27.194 "trtype": "TCP", 00:18:27.194 "adrfam": "IPv4", 00:18:27.194 "traddr": "10.0.0.1", 00:18:27.194 "trsvcid": "56536" 00:18:27.194 }, 00:18:27.194 "auth": { 00:18:27.194 "state": "completed", 00:18:27.194 "digest": "sha256", 00:18:27.194 "dhgroup": "ffdhe4096" 00:18:27.194 } 00:18:27.194 } 00:18:27.194 ]' 00:18:27.194 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:27.194 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:27.194 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:27.194 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:27.194 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:27.451 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:27.451 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:27.451 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:27.451 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:27.451 08:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:28.015 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:28.015 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:28.015 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:28.015 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.015 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.273 08:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.273 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:28.530 00:18:28.530 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:28.530 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:28.530 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:28.786 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.786 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:28.786 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.786 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:28.786 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.786 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:28.786 { 00:18:28.786 "cntlid": 27, 00:18:28.786 "qid": 0, 00:18:28.786 "state": "enabled", 00:18:28.786 "thread": "nvmf_tgt_poll_group_000", 00:18:28.786 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:28.786 "listen_address": { 00:18:28.786 "trtype": "TCP", 00:18:28.786 "adrfam": "IPv4", 00:18:28.786 "traddr": "10.0.0.2", 00:18:28.786 
"trsvcid": "4420" 00:18:28.786 }, 00:18:28.786 "peer_address": { 00:18:28.786 "trtype": "TCP", 00:18:28.786 "adrfam": "IPv4", 00:18:28.786 "traddr": "10.0.0.1", 00:18:28.786 "trsvcid": "56556" 00:18:28.786 }, 00:18:28.786 "auth": { 00:18:28.786 "state": "completed", 00:18:28.786 "digest": "sha256", 00:18:28.786 "dhgroup": "ffdhe4096" 00:18:28.786 } 00:18:28.786 } 00:18:28.786 ]' 00:18:28.786 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:28.786 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:28.786 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:28.786 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:28.786 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:29.043 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:29.043 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:29.043 08:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:29.043 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:29.043 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:29.607 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:29.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:29.607 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:29.607 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.607 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.607 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.607 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:29.607 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:29.607 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:29.863 08:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:30.121 00:18:30.121 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:30.121 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:30.121 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:30.379 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:30.379 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:30.379 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:30.379 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:30.379 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:30.379 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:30.379 { 00:18:30.379 "cntlid": 29, 00:18:30.379 "qid": 0, 00:18:30.379 "state": "enabled", 00:18:30.379 "thread": "nvmf_tgt_poll_group_000", 00:18:30.379 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:30.379 "listen_address": { 00:18:30.379 "trtype": "TCP", 00:18:30.379 "adrfam": "IPv4", 00:18:30.379 "traddr": "10.0.0.2", 00:18:30.379 "trsvcid": "4420" 00:18:30.379 }, 00:18:30.379 "peer_address": { 00:18:30.379 "trtype": "TCP", 00:18:30.379 "adrfam": "IPv4", 00:18:30.379 "traddr": "10.0.0.1", 00:18:30.379 "trsvcid": "56580" 00:18:30.379 }, 00:18:30.379 "auth": { 00:18:30.379 "state": "completed", 00:18:30.379 "digest": "sha256", 00:18:30.379 "dhgroup": "ffdhe4096" 00:18:30.379 } 00:18:30.379 } 00:18:30.379 ]' 00:18:30.379 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:30.379 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:30.379 08:15:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:30.379 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:30.379 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:30.636 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:30.636 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:30.636 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:30.636 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:30.636 08:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:31.201 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:31.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:31.201 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:31.201 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.201 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.201 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.201 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:31.201 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:31.201 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.459 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:31.716 00:18:31.716 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:31.716 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:31.716 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:31.974 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:31.974 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:31.974 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:31.974 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:18:31.974 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:31.974 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:31.974 { 00:18:31.974 "cntlid": 31, 00:18:31.974 "qid": 0, 00:18:31.974 "state": "enabled", 00:18:31.974 "thread": "nvmf_tgt_poll_group_000", 00:18:31.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:31.974 "listen_address": { 00:18:31.974 "trtype": "TCP", 00:18:31.975 "adrfam": "IPv4", 00:18:31.975 "traddr": "10.0.0.2", 00:18:31.975 "trsvcid": "4420" 00:18:31.975 }, 00:18:31.975 "peer_address": { 00:18:31.975 "trtype": "TCP", 00:18:31.975 "adrfam": "IPv4", 00:18:31.975 "traddr": "10.0.0.1", 00:18:31.975 "trsvcid": "56606" 00:18:31.975 }, 00:18:31.975 "auth": { 00:18:31.975 "state": "completed", 00:18:31.975 "digest": "sha256", 00:18:31.975 "dhgroup": "ffdhe4096" 00:18:31.975 } 00:18:31.975 } 00:18:31.975 ]' 00:18:31.975 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:31.975 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:31.975 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:31.975 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:31.975 08:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:32.233 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:32.233 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:32.233 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:32.233 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:32.233 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:32.800 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:32.800 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:32.800 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:32.800 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:32.800 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:32.800 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:32.800 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:32.800 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:32.800 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:32.800 08:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.057 08:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:33.314 00:18:33.572 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:33.572 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:33.572 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:33.572 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:33.572 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:33.572 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:33.572 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:33.572 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:33.572 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:33.572 { 00:18:33.572 "cntlid": 33, 00:18:33.572 "qid": 0, 00:18:33.572 "state": "enabled", 00:18:33.572 "thread": "nvmf_tgt_poll_group_000", 00:18:33.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:33.572 "listen_address": { 00:18:33.572 "trtype": "TCP", 00:18:33.572 "adrfam": "IPv4", 00:18:33.572 "traddr": "10.0.0.2", 00:18:33.572 
"trsvcid": "4420" 00:18:33.572 }, 00:18:33.572 "peer_address": { 00:18:33.572 "trtype": "TCP", 00:18:33.572 "adrfam": "IPv4", 00:18:33.572 "traddr": "10.0.0.1", 00:18:33.572 "trsvcid": "56618" 00:18:33.572 }, 00:18:33.572 "auth": { 00:18:33.572 "state": "completed", 00:18:33.572 "digest": "sha256", 00:18:33.572 "dhgroup": "ffdhe6144" 00:18:33.572 } 00:18:33.572 } 00:18:33.572 ]' 00:18:33.572 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:33.831 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:33.831 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:33.831 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:33.831 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:33.831 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:33.831 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:33.831 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:34.088 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:34.088 08:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:34.654 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:34.654 08:15:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:34.654 08:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:35.219 00:18:35.219 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:35.219 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:35.219 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:35.219 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:35.219 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:35.219 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.219 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:35.219 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.219 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:35.219 { 00:18:35.219 "cntlid": 35, 00:18:35.219 "qid": 0, 00:18:35.219 "state": "enabled", 00:18:35.219 "thread": "nvmf_tgt_poll_group_000", 00:18:35.219 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:35.219 "listen_address": { 00:18:35.219 "trtype": "TCP", 00:18:35.219 "adrfam": "IPv4", 00:18:35.219 "traddr": "10.0.0.2", 00:18:35.219 "trsvcid": "4420" 00:18:35.219 }, 00:18:35.219 "peer_address": { 00:18:35.219 "trtype": "TCP", 00:18:35.219 "adrfam": "IPv4", 00:18:35.219 "traddr": "10.0.0.1", 00:18:35.219 "trsvcid": "56636" 00:18:35.219 }, 00:18:35.219 "auth": { 00:18:35.219 "state": "completed", 00:18:35.219 "digest": "sha256", 00:18:35.219 "dhgroup": "ffdhe6144" 00:18:35.219 } 00:18:35.219 } 00:18:35.219 ]' 00:18:35.219 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:35.477 08:15:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:35.477 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:35.477 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:35.477 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:35.477 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:35.477 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:35.477 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:35.735 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:35.735 08:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:36.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:36.305 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:36.564 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:36.564 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.564 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.564 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:36.823 00:18:36.823 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:36.823 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:36.823 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:37.081 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:37.081 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:37.081 08:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.081 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.081 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.081 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:37.081 { 00:18:37.081 "cntlid": 37, 00:18:37.081 "qid": 0, 00:18:37.081 "state": "enabled", 00:18:37.081 "thread": "nvmf_tgt_poll_group_000", 00:18:37.081 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:37.081 "listen_address": { 00:18:37.081 "trtype": "TCP", 00:18:37.081 "adrfam": "IPv4", 00:18:37.081 "traddr": "10.0.0.2", 00:18:37.081 "trsvcid": "4420" 00:18:37.081 }, 00:18:37.081 "peer_address": { 00:18:37.081 "trtype": "TCP", 00:18:37.081 "adrfam": "IPv4", 00:18:37.081 "traddr": "10.0.0.1", 00:18:37.081 "trsvcid": "47450" 00:18:37.081 }, 00:18:37.081 "auth": { 00:18:37.081 "state": "completed", 00:18:37.081 "digest": "sha256", 00:18:37.081 "dhgroup": "ffdhe6144" 00:18:37.081 } 00:18:37.081 } 00:18:37.081 ]' 00:18:37.081 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:37.081 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:37.081 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:37.081 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:37.081 08:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:37.081 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:37.081 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:37.081 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:37.340 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:37.340 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:37.908 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:37.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:37.908 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:37.908 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.908 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:37.908 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.908 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:37.908 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:37.908 08:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.167 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:38.426 00:18:38.426 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:38.426 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:38.426 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:38.685 { 00:18:38.685 "cntlid": 39, 00:18:38.685 "qid": 0, 00:18:38.685 "state": "enabled", 00:18:38.685 "thread": "nvmf_tgt_poll_group_000", 00:18:38.685 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:38.685 "listen_address": { 00:18:38.685 "trtype": "TCP", 00:18:38.685 "adrfam": 
"IPv4", 00:18:38.685 "traddr": "10.0.0.2", 00:18:38.685 "trsvcid": "4420" 00:18:38.685 }, 00:18:38.685 "peer_address": { 00:18:38.685 "trtype": "TCP", 00:18:38.685 "adrfam": "IPv4", 00:18:38.685 "traddr": "10.0.0.1", 00:18:38.685 "trsvcid": "47466" 00:18:38.685 }, 00:18:38.685 "auth": { 00:18:38.685 "state": "completed", 00:18:38.685 "digest": "sha256", 00:18:38.685 "dhgroup": "ffdhe6144" 00:18:38.685 } 00:18:38.685 } 00:18:38.685 ]' 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:38.685 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:38.944 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:38.944 08:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:39.512 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:39.512 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:39.512 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:39.512 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.512 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.512 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.512 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:39.512 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:39.512 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:39.512 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:39.771 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:18:39.771 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:39.771 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:39.771 
08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:39.771 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:39.771 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:39.771 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.771 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:39.771 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:39.771 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:39.771 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.771 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:39.771 08:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:40.340 00:18:40.340 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:40.340 08:15:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:40.340 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:40.599 { 00:18:40.599 "cntlid": 41, 00:18:40.599 "qid": 0, 00:18:40.599 "state": "enabled", 00:18:40.599 "thread": "nvmf_tgt_poll_group_000", 00:18:40.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:40.599 "listen_address": { 00:18:40.599 "trtype": "TCP", 00:18:40.599 "adrfam": "IPv4", 00:18:40.599 "traddr": "10.0.0.2", 00:18:40.599 "trsvcid": "4420" 00:18:40.599 }, 00:18:40.599 "peer_address": { 00:18:40.599 "trtype": "TCP", 00:18:40.599 "adrfam": "IPv4", 00:18:40.599 "traddr": "10.0.0.1", 00:18:40.599 "trsvcid": "47492" 00:18:40.599 }, 00:18:40.599 "auth": { 00:18:40.599 "state": "completed", 00:18:40.599 "digest": "sha256", 00:18:40.599 "dhgroup": "ffdhe8192" 00:18:40.599 } 00:18:40.599 } 00:18:40.599 ]' 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:40.599 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:40.858 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:40.859 08:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:41.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.427 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:41.995 00:18:41.995 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:41.995 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:41.995 08:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:42.254 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:42.254 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:42.254 08:15:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:42.254 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:42.254 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:42.254 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:42.254 { 00:18:42.254 "cntlid": 43, 00:18:42.254 "qid": 0, 00:18:42.254 "state": "enabled", 00:18:42.254 "thread": "nvmf_tgt_poll_group_000", 00:18:42.254 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:42.254 "listen_address": { 00:18:42.254 "trtype": "TCP", 00:18:42.254 "adrfam": "IPv4", 00:18:42.254 "traddr": "10.0.0.2", 00:18:42.254 "trsvcid": "4420" 00:18:42.254 }, 00:18:42.254 "peer_address": { 00:18:42.254 "trtype": "TCP", 00:18:42.254 "adrfam": "IPv4", 00:18:42.254 "traddr": "10.0.0.1", 00:18:42.254 "trsvcid": "47530" 00:18:42.254 }, 00:18:42.254 "auth": { 00:18:42.254 "state": "completed", 00:18:42.254 "digest": "sha256", 00:18:42.254 "dhgroup": "ffdhe8192" 00:18:42.254 } 00:18:42.254 } 00:18:42.254 ]' 00:18:42.254 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:42.255 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:42.255 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:42.255 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:42.255 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:42.255 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:42.255 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:42.255 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:42.513 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:42.514 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:43.081 08:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:43.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:43.081 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:43.081 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.081 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.081 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.081 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:43.081 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:43.081 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:43.339 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:18:43.339 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:43.339 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:18:43.339 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:43.339 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:43.339 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:43.339 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.339 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.339 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.339 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.339 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.340 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.340 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:43.928 00:18:43.928 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:43.928 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:43.928 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:43.928 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:43.928 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:43.928 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:43.928 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:43.928 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:43.928 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:43.928 { 00:18:43.928 "cntlid": 45, 00:18:43.928 "qid": 0, 00:18:43.928 "state": "enabled", 00:18:43.928 "thread": "nvmf_tgt_poll_group_000", 00:18:43.928 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:43.928 
"listen_address": { 00:18:43.928 "trtype": "TCP", 00:18:43.928 "adrfam": "IPv4", 00:18:43.928 "traddr": "10.0.0.2", 00:18:43.928 "trsvcid": "4420" 00:18:43.928 }, 00:18:43.928 "peer_address": { 00:18:43.928 "trtype": "TCP", 00:18:43.928 "adrfam": "IPv4", 00:18:43.928 "traddr": "10.0.0.1", 00:18:43.928 "trsvcid": "47566" 00:18:43.928 }, 00:18:43.928 "auth": { 00:18:43.928 "state": "completed", 00:18:43.928 "digest": "sha256", 00:18:43.928 "dhgroup": "ffdhe8192" 00:18:43.928 } 00:18:43.928 } 00:18:43.928 ]' 00:18:43.928 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:44.186 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:44.186 08:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:44.186 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:44.187 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:44.187 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:44.187 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:44.187 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:44.445 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:44.445 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:45.013 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:45.013 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:45.013 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:45.013 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.013 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.013 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.013 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:45.014 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.014 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:18:45.014 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:18:45.014 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:45.014 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:18:45.014 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:18:45.014 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:45.014 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:45.014 08:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:45.014 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.014 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.014 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.014 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:45.014 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.014 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:45.657 00:18:45.657 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:45.657 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:45.657 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:45.916 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:45.916 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:45.916 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:45.916 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:45.916 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:45.916 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:45.916 { 00:18:45.916 "cntlid": 47, 00:18:45.916 "qid": 0, 00:18:45.916 "state": "enabled", 00:18:45.916 "thread": "nvmf_tgt_poll_group_000", 00:18:45.916 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:45.916 "listen_address": { 00:18:45.916 "trtype": "TCP", 00:18:45.916 "adrfam": "IPv4", 00:18:45.916 "traddr": "10.0.0.2", 00:18:45.916 "trsvcid": "4420" 00:18:45.916 }, 00:18:45.916 "peer_address": { 00:18:45.916 "trtype": "TCP", 00:18:45.916 "adrfam": "IPv4", 00:18:45.916 "traddr": "10.0.0.1", 00:18:45.916 "trsvcid": "47582" 00:18:45.916 }, 00:18:45.916 "auth": { 00:18:45.916 "state": "completed", 00:18:45.916 "digest": "sha256", 00:18:45.916 "dhgroup": "ffdhe8192" 00:18:45.916 } 00:18:45.916 } 00:18:45.916 ]' 00:18:45.916 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:45.916 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:18:45.916 08:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:45.916 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:45.916 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:45.916 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:45.917 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:45.917 08:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:46.176 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:46.176 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:46.745 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:46.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:46.745 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:46.745 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:46.745 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:46.745 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:46.745 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:18:46.745 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:46.745 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:46.745 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:46.745 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:47.005 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:18:47.005 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:47.005 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:47.005 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:47.005 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:47.005 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:47.005 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.005 
08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.005 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.005 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.005 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.005 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.005 08:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:47.264 00:18:47.264 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:47.264 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:47.264 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:47.264 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:47.264 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:47.264 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:47.264 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:47.522 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:47.522 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:47.522 { 00:18:47.522 "cntlid": 49, 00:18:47.522 "qid": 0, 00:18:47.522 "state": "enabled", 00:18:47.522 "thread": "nvmf_tgt_poll_group_000", 00:18:47.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:47.522 "listen_address": { 00:18:47.522 "trtype": "TCP", 00:18:47.522 "adrfam": "IPv4", 00:18:47.522 "traddr": "10.0.0.2", 00:18:47.522 "trsvcid": "4420" 00:18:47.522 }, 00:18:47.522 "peer_address": { 00:18:47.522 "trtype": "TCP", 00:18:47.522 "adrfam": "IPv4", 00:18:47.522 "traddr": "10.0.0.1", 00:18:47.522 "trsvcid": "39972" 00:18:47.522 }, 00:18:47.522 "auth": { 00:18:47.522 "state": "completed", 00:18:47.522 "digest": "sha384", 00:18:47.522 "dhgroup": "null" 00:18:47.522 } 00:18:47.522 } 00:18:47.522 ]' 00:18:47.523 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:47.523 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:47.523 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:47.523 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:47.523 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:47.523 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:47.523 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:18:47.523 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:47.781 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:47.781 08:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:48.349 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:48.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:48.349 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:48.349 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.349 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.349 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.349 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:48.349 08:16:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:48.349 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:48.608 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:18:48.608 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:48.608 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:48.608 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:48.608 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:48.609 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:48.609 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.609 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.609 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.609 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.609 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.609 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.609 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:48.868 00:18:48.868 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:48.868 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:48.868 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:48.868 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:48.868 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:48.868 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:48.868 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:48.868 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:48.868 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:48.868 { 00:18:48.868 "cntlid": 51, 00:18:48.868 "qid": 0, 00:18:48.868 "state": "enabled", 00:18:48.868 "thread": "nvmf_tgt_poll_group_000", 00:18:48.868 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:48.868 "listen_address": { 00:18:48.868 "trtype": "TCP", 00:18:48.868 "adrfam": "IPv4", 00:18:48.868 "traddr": "10.0.0.2", 00:18:48.868 "trsvcid": "4420" 00:18:48.868 }, 00:18:48.868 "peer_address": { 00:18:48.868 "trtype": "TCP", 00:18:48.868 "adrfam": "IPv4", 00:18:48.868 "traddr": "10.0.0.1", 00:18:48.868 "trsvcid": "40004" 00:18:48.868 }, 00:18:48.868 "auth": { 00:18:48.868 "state": "completed", 00:18:48.868 "digest": "sha384", 00:18:48.868 "dhgroup": "null" 00:18:48.868 } 00:18:48.868 } 00:18:48.868 ]' 00:18:48.868 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:49.127 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:49.127 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:49.127 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:49.127 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:49.127 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:49.127 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:49.127 08:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:49.385 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:49.385 08:16:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:49.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:49.954 08:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:50.213 00:18:50.213 08:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:50.213 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:50.213 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:50.472 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:50.472 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:50.472 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:50.472 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:50.472 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:50.472 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:50.472 { 00:18:50.472 "cntlid": 53, 00:18:50.472 "qid": 0, 00:18:50.472 "state": "enabled", 00:18:50.472 "thread": "nvmf_tgt_poll_group_000", 00:18:50.472 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:50.472 "listen_address": { 00:18:50.472 "trtype": "TCP", 00:18:50.472 "adrfam": "IPv4", 00:18:50.472 "traddr": "10.0.0.2", 00:18:50.472 "trsvcid": "4420" 00:18:50.472 }, 00:18:50.472 "peer_address": { 00:18:50.472 "trtype": "TCP", 00:18:50.472 "adrfam": "IPv4", 00:18:50.472 "traddr": "10.0.0.1", 00:18:50.472 "trsvcid": "40032" 00:18:50.472 }, 00:18:50.472 "auth": { 00:18:50.472 "state": "completed", 00:18:50.472 "digest": "sha384", 00:18:50.472 "dhgroup": "null" 00:18:50.472 } 00:18:50.472 } 00:18:50.472 ]' 00:18:50.472 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:18:50.472 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:50.472 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:50.472 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:50.472 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:50.731 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:50.731 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:50.731 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:50.731 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:50.731 08:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:51.300 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:51.300 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:51.300 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:51.300 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.300 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.300 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.300 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:51.300 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:51.300 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:18:51.559 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:18:51.559 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:51.559 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:51.559 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:18:51.559 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:51.559 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:51.559 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:51.559 
08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:51.559 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:51.559 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:51.559 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:51.559 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.559 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:51.817 00:18:51.817 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:51.817 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:51.817 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:52.076 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:52.076 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:52.076 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.076 08:16:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.076 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.076 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:52.076 { 00:18:52.076 "cntlid": 55, 00:18:52.076 "qid": 0, 00:18:52.076 "state": "enabled", 00:18:52.076 "thread": "nvmf_tgt_poll_group_000", 00:18:52.076 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:52.076 "listen_address": { 00:18:52.076 "trtype": "TCP", 00:18:52.076 "adrfam": "IPv4", 00:18:52.076 "traddr": "10.0.0.2", 00:18:52.076 "trsvcid": "4420" 00:18:52.076 }, 00:18:52.076 "peer_address": { 00:18:52.076 "trtype": "TCP", 00:18:52.076 "adrfam": "IPv4", 00:18:52.076 "traddr": "10.0.0.1", 00:18:52.076 "trsvcid": "40056" 00:18:52.076 }, 00:18:52.076 "auth": { 00:18:52.076 "state": "completed", 00:18:52.076 "digest": "sha384", 00:18:52.076 "dhgroup": "null" 00:18:52.076 } 00:18:52.076 } 00:18:52.076 ]' 00:18:52.076 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:52.076 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:52.076 08:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:52.076 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:18:52.076 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:52.076 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:52.076 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:52.076 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:52.335 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:52.335 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:52.903 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:52.903 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:52.903 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:52.903 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:52.903 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:52.903 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:52.903 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:52.903 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:52.903 08:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:52.903 08:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.162 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:53.421 00:18:53.421 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:53.421 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:53.421 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:53.679 { 00:18:53.679 "cntlid": 57, 00:18:53.679 "qid": 0, 00:18:53.679 "state": "enabled", 00:18:53.679 "thread": "nvmf_tgt_poll_group_000", 00:18:53.679 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:53.679 "listen_address": { 00:18:53.679 "trtype": "TCP", 00:18:53.679 "adrfam": "IPv4", 00:18:53.679 "traddr": "10.0.0.2", 00:18:53.679 
"trsvcid": "4420" 00:18:53.679 }, 00:18:53.679 "peer_address": { 00:18:53.679 "trtype": "TCP", 00:18:53.679 "adrfam": "IPv4", 00:18:53.679 "traddr": "10.0.0.1", 00:18:53.679 "trsvcid": "40096" 00:18:53.679 }, 00:18:53.679 "auth": { 00:18:53.679 "state": "completed", 00:18:53.679 "digest": "sha384", 00:18:53.679 "dhgroup": "ffdhe2048" 00:18:53.679 } 00:18:53.679 } 00:18:53.679 ]' 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:53.679 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:53.938 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:53.938 08:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:18:54.504 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:54.504 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:54.504 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:54.504 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.504 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.504 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.504 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:54.504 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:54.504 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:54.762 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:18:54.762 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:54.762 08:16:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:54.762 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:54.762 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:18:54.762 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:54.762 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.762 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:54.762 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:54.762 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:54.762 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.762 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:54.762 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:55.020 00:18:55.020 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:55.020 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:55.020 08:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:55.278 { 00:18:55.278 "cntlid": 59, 00:18:55.278 "qid": 0, 00:18:55.278 "state": "enabled", 00:18:55.278 "thread": "nvmf_tgt_poll_group_000", 00:18:55.278 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:55.278 "listen_address": { 00:18:55.278 "trtype": "TCP", 00:18:55.278 "adrfam": "IPv4", 00:18:55.278 "traddr": "10.0.0.2", 00:18:55.278 "trsvcid": "4420" 00:18:55.278 }, 00:18:55.278 "peer_address": { 00:18:55.278 "trtype": "TCP", 00:18:55.278 "adrfam": "IPv4", 00:18:55.278 "traddr": "10.0.0.1", 00:18:55.278 "trsvcid": "40140" 00:18:55.278 }, 00:18:55.278 "auth": { 00:18:55.278 "state": "completed", 00:18:55.278 "digest": "sha384", 00:18:55.278 "dhgroup": "ffdhe2048" 00:18:55.278 } 00:18:55.278 } 00:18:55.278 ]' 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:55.278 08:16:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:55.278 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:55.535 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:55.535 08:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:18:56.179 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:56.179 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:56.179 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:56.179 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.179 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.179 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.179 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:56.179 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:56.179 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:56.459 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:18:56.459 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:56.459 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:56.459 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:56.460 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:18:56.460 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:56.460 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:18:56.460 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.460 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.460 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.460 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.460 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.460 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:56.751 00:18:56.751 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:56.751 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:56.751 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:56.751 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:56.751 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:56.751 08:16:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:56.751 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:56.751 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:56.751 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:56.751 { 00:18:56.751 "cntlid": 61, 00:18:56.751 "qid": 0, 00:18:56.751 "state": "enabled", 00:18:56.751 "thread": "nvmf_tgt_poll_group_000", 00:18:56.751 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:56.751 "listen_address": { 00:18:56.751 "trtype": "TCP", 00:18:56.751 "adrfam": "IPv4", 00:18:56.751 "traddr": "10.0.0.2", 00:18:56.751 "trsvcid": "4420" 00:18:56.751 }, 00:18:56.751 "peer_address": { 00:18:56.751 "trtype": "TCP", 00:18:56.751 "adrfam": "IPv4", 00:18:56.751 "traddr": "10.0.0.1", 00:18:56.751 "trsvcid": "44578" 00:18:56.751 }, 00:18:56.751 "auth": { 00:18:56.751 "state": "completed", 00:18:56.751 "digest": "sha384", 00:18:56.751 "dhgroup": "ffdhe2048" 00:18:56.751 } 00:18:56.751 } 00:18:56.751 ]' 00:18:56.751 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:56.751 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:56.751 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:57.010 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:57.010 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:57.010 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:57.010 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:57.010 08:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:57.269 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:57.269 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:57.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:57.837 08:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:18:58.097 00:18:58.097 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:58.097 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:58.097 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:58.355 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:58.355 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:58.355 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:58.355 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:58.355 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:58.356 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:58.356 { 00:18:58.356 "cntlid": 63, 00:18:58.356 "qid": 0, 00:18:58.356 "state": "enabled", 00:18:58.356 "thread": "nvmf_tgt_poll_group_000", 00:18:58.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:58.356 "listen_address": { 00:18:58.356 "trtype": "TCP", 00:18:58.356 "adrfam": 
"IPv4", 00:18:58.356 "traddr": "10.0.0.2", 00:18:58.356 "trsvcid": "4420" 00:18:58.356 }, 00:18:58.356 "peer_address": { 00:18:58.356 "trtype": "TCP", 00:18:58.356 "adrfam": "IPv4", 00:18:58.356 "traddr": "10.0.0.1", 00:18:58.356 "trsvcid": "44612" 00:18:58.356 }, 00:18:58.356 "auth": { 00:18:58.356 "state": "completed", 00:18:58.356 "digest": "sha384", 00:18:58.356 "dhgroup": "ffdhe2048" 00:18:58.356 } 00:18:58.356 } 00:18:58.356 ]' 00:18:58.356 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:58.356 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:18:58.356 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:58.356 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:18:58.356 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:18:58.614 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:58.614 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:58.614 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:58.614 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:58.614 08:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:18:59.182 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:59.182 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:59.182 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:59.182 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.182 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:18:59.441 
08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.441 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:59.699 00:18:59.699 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:18:59.699 08:16:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:18:59.699 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:59.957 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:59.957 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:59.957 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:59.957 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:59.957 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:59.957 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:18:59.957 { 00:18:59.957 "cntlid": 65, 00:18:59.957 "qid": 0, 00:18:59.957 "state": "enabled", 00:18:59.957 "thread": "nvmf_tgt_poll_group_000", 00:18:59.957 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:18:59.957 "listen_address": { 00:18:59.957 "trtype": "TCP", 00:18:59.957 "adrfam": "IPv4", 00:18:59.957 "traddr": "10.0.0.2", 00:18:59.957 "trsvcid": "4420" 00:18:59.957 }, 00:18:59.957 "peer_address": { 00:18:59.957 "trtype": "TCP", 00:18:59.957 "adrfam": "IPv4", 00:18:59.957 "traddr": "10.0.0.1", 00:18:59.957 "trsvcid": "44656" 00:18:59.957 }, 00:18:59.957 "auth": { 00:18:59.957 "state": "completed", 00:18:59.957 "digest": "sha384", 00:18:59.957 "dhgroup": "ffdhe3072" 00:18:59.957 } 00:18:59.957 } 00:18:59.957 ]' 00:18:59.957 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:18:59.957 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:18:59.957 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:18:59.957 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:18:59.957 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:00.216 08:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:00.216 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:00.216 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:00.216 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:00.216 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:00.784 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:00.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:00.784 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:00.784 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:00.784 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:00.784 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:00.784 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:00.784 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:00.784 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:01.043 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:19:01.043 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:01.043 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:01.043 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:01.043 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:01.043 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:01.043 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:19:01.043 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.043 08:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.043 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.043 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.043 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.043 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:01.301 00:19:01.301 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:01.301 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:01.301 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:01.560 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:01.560 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:01.560 08:16:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:01.560 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:01.560 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:01.560 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:01.560 { 00:19:01.560 "cntlid": 67, 00:19:01.560 "qid": 0, 00:19:01.560 "state": "enabled", 00:19:01.560 "thread": "nvmf_tgt_poll_group_000", 00:19:01.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:01.560 "listen_address": { 00:19:01.560 "trtype": "TCP", 00:19:01.560 "adrfam": "IPv4", 00:19:01.560 "traddr": "10.0.0.2", 00:19:01.560 "trsvcid": "4420" 00:19:01.560 }, 00:19:01.560 "peer_address": { 00:19:01.560 "trtype": "TCP", 00:19:01.560 "adrfam": "IPv4", 00:19:01.560 "traddr": "10.0.0.1", 00:19:01.560 "trsvcid": "44694" 00:19:01.560 }, 00:19:01.560 "auth": { 00:19:01.560 "state": "completed", 00:19:01.560 "digest": "sha384", 00:19:01.560 "dhgroup": "ffdhe3072" 00:19:01.560 } 00:19:01.560 } 00:19:01.560 ]' 00:19:01.560 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:01.560 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:01.560 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:01.560 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:01.560 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:01.820 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:01.820 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:01.820 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:01.820 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:01.820 08:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:02.388 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:02.388 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:02.388 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:02.388 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.388 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.388 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.388 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:02.388 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:02.388 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.647 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:02.906 00:19:02.906 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:02.906 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:02.906 08:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:03.164 { 00:19:03.164 "cntlid": 69, 00:19:03.164 "qid": 0, 00:19:03.164 "state": "enabled", 00:19:03.164 "thread": "nvmf_tgt_poll_group_000", 00:19:03.164 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:03.164 
"listen_address": { 00:19:03.164 "trtype": "TCP", 00:19:03.164 "adrfam": "IPv4", 00:19:03.164 "traddr": "10.0.0.2", 00:19:03.164 "trsvcid": "4420" 00:19:03.164 }, 00:19:03.164 "peer_address": { 00:19:03.164 "trtype": "TCP", 00:19:03.164 "adrfam": "IPv4", 00:19:03.164 "traddr": "10.0.0.1", 00:19:03.164 "trsvcid": "44724" 00:19:03.164 }, 00:19:03.164 "auth": { 00:19:03.164 "state": "completed", 00:19:03.164 "digest": "sha384", 00:19:03.164 "dhgroup": "ffdhe3072" 00:19:03.164 } 00:19:03.164 } 00:19:03.164 ]' 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:03.164 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:03.423 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:03.423 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:03.991 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:03.991 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:03.991 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:03.991 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:03.991 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:03.991 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:03.991 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:03.991 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:03.991 08:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.251 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:04.510 00:19:04.510 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:04.510 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:04.510 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:04.769 { 00:19:04.769 "cntlid": 71, 00:19:04.769 "qid": 0, 00:19:04.769 "state": "enabled", 00:19:04.769 "thread": "nvmf_tgt_poll_group_000", 00:19:04.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:04.769 "listen_address": { 00:19:04.769 "trtype": "TCP", 00:19:04.769 "adrfam": "IPv4", 00:19:04.769 "traddr": "10.0.0.2", 00:19:04.769 "trsvcid": "4420" 00:19:04.769 }, 00:19:04.769 "peer_address": { 00:19:04.769 "trtype": "TCP", 00:19:04.769 "adrfam": "IPv4", 00:19:04.769 "traddr": "10.0.0.1", 00:19:04.769 "trsvcid": "44766" 00:19:04.769 }, 00:19:04.769 "auth": { 00:19:04.769 "state": "completed", 00:19:04.769 "digest": "sha384", 00:19:04.769 "dhgroup": "ffdhe3072" 00:19:04.769 } 00:19:04.769 } 00:19:04.769 ]' 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:04.769 08:16:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:04.769 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:05.028 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:05.028 08:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:05.595 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:05.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:05.595 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:05.595 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:05.595 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.595 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.595 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:05.595 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:05.595 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.595 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:05.852 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:06.110 00:19:06.110 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:06.110 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:06.110 08:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:06.368 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:06.368 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:06.368 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:06.368 08:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:06.368 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:06.368 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:06.368 { 00:19:06.368 "cntlid": 73, 00:19:06.368 "qid": 0, 00:19:06.368 "state": "enabled", 00:19:06.368 "thread": "nvmf_tgt_poll_group_000", 00:19:06.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:06.368 "listen_address": { 00:19:06.368 "trtype": "TCP", 00:19:06.368 "adrfam": "IPv4", 00:19:06.368 "traddr": "10.0.0.2", 00:19:06.368 "trsvcid": "4420" 00:19:06.368 }, 00:19:06.369 "peer_address": { 00:19:06.369 "trtype": "TCP", 00:19:06.369 "adrfam": "IPv4", 00:19:06.369 "traddr": "10.0.0.1", 00:19:06.369 "trsvcid": "59260" 00:19:06.369 }, 00:19:06.369 "auth": { 00:19:06.369 "state": "completed", 00:19:06.369 "digest": "sha384", 00:19:06.369 "dhgroup": "ffdhe4096" 00:19:06.369 } 00:19:06.369 } 00:19:06.369 ]' 00:19:06.369 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:06.369 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:06.369 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:06.369 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:06.369 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:06.369 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:06.369 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:06.369 08:16:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:06.628 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:06.628 08:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:07.196 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:07.196 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:07.196 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:07.196 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.196 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.196 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.196 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:07.196 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.196 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.455 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:07.714 00:19:07.714 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:07.714 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:07.714 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:07.974 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:07.974 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:07.974 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:07.974 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:07.974 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:07.974 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:07.974 { 00:19:07.974 "cntlid": 75, 00:19:07.974 "qid": 0, 00:19:07.974 "state": "enabled", 00:19:07.974 "thread": "nvmf_tgt_poll_group_000", 00:19:07.974 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:07.974 
"listen_address": { 00:19:07.974 "trtype": "TCP", 00:19:07.974 "adrfam": "IPv4", 00:19:07.974 "traddr": "10.0.0.2", 00:19:07.974 "trsvcid": "4420" 00:19:07.974 }, 00:19:07.974 "peer_address": { 00:19:07.974 "trtype": "TCP", 00:19:07.974 "adrfam": "IPv4", 00:19:07.974 "traddr": "10.0.0.1", 00:19:07.974 "trsvcid": "59282" 00:19:07.974 }, 00:19:07.974 "auth": { 00:19:07.974 "state": "completed", 00:19:07.974 "digest": "sha384", 00:19:07.974 "dhgroup": "ffdhe4096" 00:19:07.974 } 00:19:07.974 } 00:19:07.974 ]' 00:19:07.974 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:07.974 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:07.974 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:07.975 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:07.975 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:07.975 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:07.975 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:07.975 08:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:08.236 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:08.236 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:08.803 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:08.803 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:08.803 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:08.803 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:08.803 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:08.803 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:08.803 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:08.803 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:08.803 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.061 08:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:09.320 00:19:09.320 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:09.320 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:09.320 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:09.579 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:09.579 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:09.580 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:09.580 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:09.580 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:09.580 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:09.580 { 00:19:09.580 "cntlid": 77, 00:19:09.580 "qid": 0, 00:19:09.580 "state": "enabled", 00:19:09.580 "thread": "nvmf_tgt_poll_group_000", 00:19:09.580 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:09.580 "listen_address": { 00:19:09.580 "trtype": "TCP", 00:19:09.580 "adrfam": "IPv4", 00:19:09.580 "traddr": "10.0.0.2", 00:19:09.580 "trsvcid": "4420" 00:19:09.580 }, 00:19:09.580 "peer_address": { 00:19:09.580 "trtype": "TCP", 00:19:09.580 "adrfam": "IPv4", 00:19:09.580 "traddr": "10.0.0.1", 00:19:09.580 "trsvcid": "59308" 00:19:09.580 }, 00:19:09.580 "auth": { 00:19:09.580 "state": "completed", 00:19:09.580 "digest": "sha384", 00:19:09.580 "dhgroup": "ffdhe4096" 00:19:09.580 } 00:19:09.580 } 00:19:09.580 ]' 00:19:09.580 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:09.580 08:16:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:09.580 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:09.580 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:09.580 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:09.580 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:09.580 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:09.580 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:09.837 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:09.837 08:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:10.404 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:10.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:10.404 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:10.404 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.404 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.404 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.404 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:10.404 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:10.404 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:19:10.665 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:19:10.665 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:10.665 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:10.665 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:10.665 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:10.665 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:10.665 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:10.665 08:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:10.665 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:10.665 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:10.665 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:10.665 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.665 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:10.926 00:19:10.927 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:10.927 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:10.927 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:10.927 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:10.927 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:11.185 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:11.185 08:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:11.186 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:11.186 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:11.186 { 00:19:11.186 "cntlid": 79, 00:19:11.186 "qid": 0, 00:19:11.186 "state": "enabled", 00:19:11.186 "thread": "nvmf_tgt_poll_group_000", 00:19:11.186 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:11.186 "listen_address": { 00:19:11.186 "trtype": "TCP", 00:19:11.186 "adrfam": "IPv4", 00:19:11.186 "traddr": "10.0.0.2", 00:19:11.186 "trsvcid": "4420" 00:19:11.186 }, 00:19:11.186 "peer_address": { 00:19:11.186 "trtype": "TCP", 00:19:11.186 "adrfam": "IPv4", 00:19:11.186 "traddr": "10.0.0.1", 00:19:11.186 "trsvcid": "59336" 00:19:11.186 }, 00:19:11.186 "auth": { 00:19:11.186 "state": "completed", 00:19:11.186 "digest": "sha384", 00:19:11.186 "dhgroup": "ffdhe4096" 00:19:11.186 } 00:19:11.186 } 00:19:11.186 ]' 00:19:11.186 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:11.186 08:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:11.186 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:11.186 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:11.186 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:11.186 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:11.186 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:11.186 08:16:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:11.443 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:11.443 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:12.010 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:12.010 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:12.010 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:12.010 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.010 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.010 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.010 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:12.010 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:12.010 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:19:12.010 08:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.269 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:12.527 00:19:12.527 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:12.527 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:12.527 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:12.785 { 00:19:12.785 "cntlid": 81, 00:19:12.785 "qid": 0, 00:19:12.785 "state": "enabled", 00:19:12.785 "thread": "nvmf_tgt_poll_group_000", 00:19:12.785 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:12.785 "listen_address": { 
00:19:12.785 "trtype": "TCP", 00:19:12.785 "adrfam": "IPv4", 00:19:12.785 "traddr": "10.0.0.2", 00:19:12.785 "trsvcid": "4420" 00:19:12.785 }, 00:19:12.785 "peer_address": { 00:19:12.785 "trtype": "TCP", 00:19:12.785 "adrfam": "IPv4", 00:19:12.785 "traddr": "10.0.0.1", 00:19:12.785 "trsvcid": "59360" 00:19:12.785 }, 00:19:12.785 "auth": { 00:19:12.785 "state": "completed", 00:19:12.785 "digest": "sha384", 00:19:12.785 "dhgroup": "ffdhe6144" 00:19:12.785 } 00:19:12.785 } 00:19:12.785 ]' 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:12.785 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:13.044 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:13.044 08:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:13.625 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:13.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:13.626 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:13.626 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.626 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.626 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.626 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:13.626 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:13.626 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:13.888 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:19:13.888 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:19:13.888 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:13.888 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:13.888 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:13.888 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:13.888 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.888 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:13.888 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:13.888 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:13.888 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.889 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:13.889 08:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:14.148 00:19:14.148 08:16:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:14.148 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:14.148 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:14.407 { 00:19:14.407 "cntlid": 83, 00:19:14.407 "qid": 0, 00:19:14.407 "state": "enabled", 00:19:14.407 "thread": "nvmf_tgt_poll_group_000", 00:19:14.407 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:14.407 "listen_address": { 00:19:14.407 "trtype": "TCP", 00:19:14.407 "adrfam": "IPv4", 00:19:14.407 "traddr": "10.0.0.2", 00:19:14.407 "trsvcid": "4420" 00:19:14.407 }, 00:19:14.407 "peer_address": { 00:19:14.407 "trtype": "TCP", 00:19:14.407 "adrfam": "IPv4", 00:19:14.407 "traddr": "10.0.0.1", 00:19:14.407 "trsvcid": "59380" 00:19:14.407 }, 00:19:14.407 "auth": { 00:19:14.407 "state": "completed", 00:19:14.407 "digest": "sha384", 00:19:14.407 "dhgroup": "ffdhe6144" 00:19:14.407 } 00:19:14.407 } 00:19:14.407 ]' 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:14.407 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:14.666 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:14.666 08:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:15.233 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:15.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:15.233 08:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:15.233 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.233 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.233 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.233 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:15.234 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:15.234 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:15.492 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:19:15.492 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:15.492 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:15.492 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:15.492 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:15.492 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:15.492 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.492 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:15.492 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:15.492 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:15.492 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.492 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.493 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:15.751 00:19:15.751 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:15.751 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:15.751 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:16.011 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:16.011 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:16.011 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.011 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.011 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.011 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:16.011 { 00:19:16.011 "cntlid": 85, 00:19:16.011 "qid": 0, 00:19:16.011 "state": "enabled", 00:19:16.011 "thread": "nvmf_tgt_poll_group_000", 00:19:16.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:16.011 "listen_address": { 00:19:16.011 "trtype": "TCP", 00:19:16.011 "adrfam": "IPv4", 00:19:16.011 "traddr": "10.0.0.2", 00:19:16.011 "trsvcid": "4420" 00:19:16.011 }, 00:19:16.011 "peer_address": { 00:19:16.011 "trtype": "TCP", 00:19:16.011 "adrfam": "IPv4", 00:19:16.011 "traddr": "10.0.0.1", 00:19:16.011 "trsvcid": "59420" 00:19:16.011 }, 00:19:16.011 "auth": { 00:19:16.011 "state": "completed", 00:19:16.011 "digest": "sha384", 00:19:16.011 "dhgroup": "ffdhe6144" 00:19:16.011 } 00:19:16.011 } 00:19:16.011 ]' 00:19:16.011 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:16.011 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:16.011 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:16.011 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:16.011 08:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:16.271 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:16.271 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:16.271 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:16.271 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:16.271 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:16.838 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:16.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:16.838 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:16.838 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:16.838 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:16.838 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:16.838 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:16.838 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:16.838 08:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.097 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:17.356 00:19:17.616 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:17.616 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:17.616 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:17.616 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:17.616 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:17.616 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:17.616 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:17.616 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:17.616 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:17.616 { 00:19:17.616 "cntlid": 87, 00:19:17.616 "qid": 0, 00:19:17.616 "state": "enabled", 00:19:17.616 "thread": "nvmf_tgt_poll_group_000", 00:19:17.616 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:17.616 "listen_address": { 00:19:17.616 "trtype": 
"TCP", 00:19:17.616 "adrfam": "IPv4", 00:19:17.616 "traddr": "10.0.0.2", 00:19:17.616 "trsvcid": "4420" 00:19:17.616 }, 00:19:17.616 "peer_address": { 00:19:17.616 "trtype": "TCP", 00:19:17.616 "adrfam": "IPv4", 00:19:17.616 "traddr": "10.0.0.1", 00:19:17.616 "trsvcid": "56062" 00:19:17.616 }, 00:19:17.616 "auth": { 00:19:17.616 "state": "completed", 00:19:17.616 "digest": "sha384", 00:19:17.616 "dhgroup": "ffdhe6144" 00:19:17.616 } 00:19:17.616 } 00:19:17.616 ]' 00:19:17.616 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:17.616 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:17.616 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:17.874 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:17.874 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:17.874 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:17.874 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:17.874 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:18.133 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:18.133 08:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:18.702 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:18.702 08:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:18.702 08:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:19.270 00:19:19.270 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:19.270 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:19.270 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:19.529 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:19.529 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:19.529 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:19.529 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.529 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:19.530 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:19.530 { 00:19:19.530 "cntlid": 89, 00:19:19.530 "qid": 0, 00:19:19.530 "state": "enabled", 00:19:19.530 "thread": "nvmf_tgt_poll_group_000", 00:19:19.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:19.530 "listen_address": { 00:19:19.530 "trtype": "TCP", 00:19:19.530 "adrfam": "IPv4", 00:19:19.530 "traddr": "10.0.0.2", 00:19:19.530 "trsvcid": "4420" 00:19:19.530 }, 00:19:19.530 "peer_address": { 00:19:19.530 "trtype": "TCP", 00:19:19.530 "adrfam": "IPv4", 00:19:19.530 "traddr": "10.0.0.1", 00:19:19.530 "trsvcid": "56090" 00:19:19.530 }, 00:19:19.530 "auth": { 00:19:19.530 "state": "completed", 00:19:19.530 "digest": "sha384", 00:19:19.530 "dhgroup": "ffdhe8192" 00:19:19.530 } 00:19:19.530 } 00:19:19.530 ]' 00:19:19.530 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:19.530 08:16:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:19.530 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:19.530 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:19.530 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:19.530 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:19.530 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:19.530 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:19.789 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:19.789 08:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:20.358 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:20.358 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:19:20.358 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:20.358 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.358 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.358 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.358 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:20.358 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:20.358 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:20.617 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:21.185 00:19:21.185 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:21.185 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:21.185 08:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:21.185 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:21.185 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:21.185 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:21.185 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.185 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:21.185 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:21.185 { 00:19:21.185 "cntlid": 91, 00:19:21.185 "qid": 0, 00:19:21.185 "state": "enabled", 00:19:21.185 "thread": "nvmf_tgt_poll_group_000", 00:19:21.185 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:21.185 "listen_address": { 00:19:21.185 "trtype": "TCP", 00:19:21.185 "adrfam": "IPv4", 00:19:21.185 "traddr": "10.0.0.2", 00:19:21.185 "trsvcid": "4420" 00:19:21.185 }, 00:19:21.185 "peer_address": { 00:19:21.185 "trtype": "TCP", 00:19:21.185 "adrfam": "IPv4", 00:19:21.185 "traddr": "10.0.0.1", 00:19:21.185 "trsvcid": "56128" 00:19:21.185 }, 00:19:21.185 "auth": { 00:19:21.185 "state": "completed", 00:19:21.185 "digest": "sha384", 00:19:21.185 "dhgroup": "ffdhe8192" 00:19:21.185 } 00:19:21.185 } 00:19:21.185 ]' 00:19:21.185 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:21.185 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:21.185 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:21.443 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:21.443 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:21.443 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:19:21.443 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:21.443 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:21.701 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:21.701 08:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:22.270 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.270 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:22.839 00:19:22.839 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:22.839 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:22.839 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:23.098 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:23.098 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:23.098 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.098 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.098 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.098 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:23.098 { 00:19:23.098 "cntlid": 93, 00:19:23.098 "qid": 0, 00:19:23.098 "state": "enabled", 00:19:23.098 "thread": "nvmf_tgt_poll_group_000", 00:19:23.098 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:23.098 "listen_address": { 00:19:23.098 "trtype": "TCP", 00:19:23.098 "adrfam": "IPv4", 00:19:23.098 "traddr": "10.0.0.2", 00:19:23.098 "trsvcid": "4420" 00:19:23.098 }, 00:19:23.098 "peer_address": { 00:19:23.098 "trtype": "TCP", 00:19:23.098 "adrfam": "IPv4", 00:19:23.098 "traddr": "10.0.0.1", 00:19:23.098 "trsvcid": "56150" 00:19:23.098 }, 00:19:23.098 "auth": { 00:19:23.098 "state": "completed", 00:19:23.098 "digest": "sha384", 00:19:23.098 "dhgroup": "ffdhe8192" 00:19:23.098 } 00:19:23.098 } 00:19:23.098 ]' 00:19:23.098 08:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:23.098 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:23.098 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:23.098 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:23.098 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:23.098 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:23.098 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:23.098 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.357 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:23.357 08:16:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:23.925 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:23.925 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:23.925 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:23.925 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.925 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:23.925 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.925 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:23.925 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:23.925 08:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.184 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:24.752 00:19:24.752 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:19:24.752 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:24.752 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.752 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.752 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.752 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.752 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.752 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.752 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:24.753 { 00:19:24.753 "cntlid": 95, 00:19:24.753 "qid": 0, 00:19:24.753 "state": "enabled", 00:19:24.753 "thread": "nvmf_tgt_poll_group_000", 00:19:24.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:24.753 "listen_address": { 00:19:24.753 "trtype": "TCP", 00:19:24.753 "adrfam": "IPv4", 00:19:24.753 "traddr": "10.0.0.2", 00:19:24.753 "trsvcid": "4420" 00:19:24.753 }, 00:19:24.753 "peer_address": { 00:19:24.753 "trtype": "TCP", 00:19:24.753 "adrfam": "IPv4", 00:19:24.753 "traddr": "10.0.0.1", 00:19:24.753 "trsvcid": "56168" 00:19:24.753 }, 00:19:24.753 "auth": { 00:19:24.753 "state": "completed", 00:19:24.753 "digest": "sha384", 00:19:24.753 "dhgroup": "ffdhe8192" 00:19:24.753 } 00:19:24.753 } 00:19:24.753 ]' 00:19:24.753 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:25.011 08:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:19:25.011 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:25.011 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:25.011 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:25.011 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.011 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.011 08:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.270 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:25.270 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:25.838 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:25.838 08:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:26.098 00:19:26.098 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:26.098 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:26.098 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.358 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:26.358 08:16:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:26.358 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:26.358 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.358 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:26.358 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:26.358 { 00:19:26.358 "cntlid": 97, 00:19:26.358 "qid": 0, 00:19:26.358 "state": "enabled", 00:19:26.358 "thread": "nvmf_tgt_poll_group_000", 00:19:26.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:26.358 "listen_address": { 00:19:26.358 "trtype": "TCP", 00:19:26.358 "adrfam": "IPv4", 00:19:26.358 "traddr": "10.0.0.2", 00:19:26.358 "trsvcid": "4420" 00:19:26.358 }, 00:19:26.358 "peer_address": { 00:19:26.358 "trtype": "TCP", 00:19:26.358 "adrfam": "IPv4", 00:19:26.358 "traddr": "10.0.0.1", 00:19:26.358 "trsvcid": "49864" 00:19:26.358 }, 00:19:26.358 "auth": { 00:19:26.358 "state": "completed", 00:19:26.358 "digest": "sha512", 00:19:26.358 "dhgroup": "null" 00:19:26.358 } 00:19:26.358 } 00:19:26.358 ]' 00:19:26.358 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:26.358 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:26.358 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:26.358 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:26.358 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:26.617 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:26.617 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:26.617 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:26.617 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:26.617 08:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:27.194 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:27.194 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:27.194 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:27.195 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.195 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.195 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.195 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:27.195 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:27.195 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.453 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:27.711 00:19:27.711 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:27.711 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:27.711 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:27.968 { 00:19:27.968 "cntlid": 99, 
00:19:27.968 "qid": 0, 00:19:27.968 "state": "enabled", 00:19:27.968 "thread": "nvmf_tgt_poll_group_000", 00:19:27.968 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:27.968 "listen_address": { 00:19:27.968 "trtype": "TCP", 00:19:27.968 "adrfam": "IPv4", 00:19:27.968 "traddr": "10.0.0.2", 00:19:27.968 "trsvcid": "4420" 00:19:27.968 }, 00:19:27.968 "peer_address": { 00:19:27.968 "trtype": "TCP", 00:19:27.968 "adrfam": "IPv4", 00:19:27.968 "traddr": "10.0.0.1", 00:19:27.968 "trsvcid": "49888" 00:19:27.968 }, 00:19:27.968 "auth": { 00:19:27.968 "state": "completed", 00:19:27.968 "digest": "sha512", 00:19:27.968 "dhgroup": "null" 00:19:27.968 } 00:19:27.968 } 00:19:27.968 ]' 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.968 08:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:28.226 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret 
DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:28.226 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:28.794 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.794 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:28.794 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.794 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.794 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.794 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:28.794 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:28.794 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.052 08:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:29.310 00:19:29.310 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:29.310 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.310 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:29.569 { 00:19:29.569 "cntlid": 101, 00:19:29.569 "qid": 0, 00:19:29.569 "state": "enabled", 00:19:29.569 "thread": "nvmf_tgt_poll_group_000", 00:19:29.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:29.569 "listen_address": { 00:19:29.569 "trtype": "TCP", 00:19:29.569 "adrfam": "IPv4", 00:19:29.569 "traddr": "10.0.0.2", 00:19:29.569 "trsvcid": "4420" 00:19:29.569 }, 00:19:29.569 "peer_address": { 00:19:29.569 "trtype": "TCP", 00:19:29.569 "adrfam": "IPv4", 00:19:29.569 "traddr": "10.0.0.1", 00:19:29.569 "trsvcid": "49928" 00:19:29.569 }, 00:19:29.569 "auth": { 00:19:29.569 "state": "completed", 00:19:29.569 "digest": "sha512", 00:19:29.569 "dhgroup": "null" 00:19:29.569 } 00:19:29.569 } 
00:19:29.569 ]' 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.569 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.828 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:29.828 08:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:30.394 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.394 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.394 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:30.394 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.394 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.394 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.394 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:30.394 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:30.394 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.652 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:30.909 00:19:30.909 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:30.909 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:30.909 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:30.909 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:30.909 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:19:30.909 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.909 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.909 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.909 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:30.909 { 00:19:30.909 "cntlid": 103, 00:19:30.909 "qid": 0, 00:19:30.909 "state": "enabled", 00:19:30.909 "thread": "nvmf_tgt_poll_group_000", 00:19:30.909 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:30.909 "listen_address": { 00:19:30.909 "trtype": "TCP", 00:19:30.909 "adrfam": "IPv4", 00:19:30.909 "traddr": "10.0.0.2", 00:19:30.909 "trsvcid": "4420" 00:19:30.909 }, 00:19:30.909 "peer_address": { 00:19:30.909 "trtype": "TCP", 00:19:30.909 "adrfam": "IPv4", 00:19:30.909 "traddr": "10.0.0.1", 00:19:30.909 "trsvcid": "49940" 00:19:30.909 }, 00:19:30.909 "auth": { 00:19:30.909 "state": "completed", 00:19:30.909 "digest": "sha512", 00:19:30.909 "dhgroup": "null" 00:19:30.909 } 00:19:30.909 } 00:19:30.909 ]' 00:19:30.909 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:31.168 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:31.168 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:31.168 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:19:31.168 08:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:31.168 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.168 08:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.168 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:31.426 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:31.426 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:31.994 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:31.994 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:31.994 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:31.994 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.994 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.994 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.994 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:31.994 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:31.994 08:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.994 08:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.994 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:32.253 00:19:32.253 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:32.512 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:32.512 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:32.512 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:32.512 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:32.512 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:32.512 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:32.512 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:32.512 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:32.512 { 00:19:32.512 "cntlid": 105, 00:19:32.512 "qid": 0, 00:19:32.512 "state": "enabled", 00:19:32.512 "thread": "nvmf_tgt_poll_group_000", 00:19:32.512 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:32.512 "listen_address": { 00:19:32.512 "trtype": "TCP", 00:19:32.512 "adrfam": "IPv4", 00:19:32.512 "traddr": "10.0.0.2", 00:19:32.512 "trsvcid": "4420" 00:19:32.512 }, 00:19:32.512 "peer_address": { 00:19:32.512 "trtype": "TCP", 00:19:32.512 "adrfam": "IPv4", 00:19:32.512 "traddr": "10.0.0.1", 00:19:32.512 "trsvcid": "49968" 00:19:32.512 }, 00:19:32.512 "auth": { 00:19:32.512 "state": "completed", 00:19:32.512 "digest": "sha512", 00:19:32.512 "dhgroup": "ffdhe2048" 00:19:32.512 } 00:19:32.512 } 00:19:32.512 ]' 00:19:32.512 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:32.512 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:32.512 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:32.771 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:32.771 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:32.771 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:32.771 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:32.771 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:33.030 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret 
DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:33.030 08:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:33.597 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.597 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.597 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:33.597 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.597 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.597 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.597 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:33.597 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:33.598 08:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.598 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.906 00:19:33.906 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:33.906 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:33.906 08:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:34.221 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:34.221 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:34.221 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:34.221 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:34.221 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:34.221 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:34.221 { 00:19:34.221 "cntlid": 107, 00:19:34.221 "qid": 0, 00:19:34.221 "state": "enabled", 00:19:34.221 "thread": "nvmf_tgt_poll_group_000", 00:19:34.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:34.221 "listen_address": { 00:19:34.221 "trtype": "TCP", 00:19:34.221 "adrfam": "IPv4", 00:19:34.221 "traddr": "10.0.0.2", 00:19:34.221 "trsvcid": "4420" 00:19:34.221 }, 00:19:34.221 "peer_address": { 00:19:34.221 "trtype": "TCP", 00:19:34.221 "adrfam": "IPv4", 00:19:34.221 "traddr": "10.0.0.1", 00:19:34.221 "trsvcid": "49978" 00:19:34.221 }, 00:19:34.221 "auth": { 00:19:34.221 "state": 
"completed", 00:19:34.221 "digest": "sha512", 00:19:34.221 "dhgroup": "ffdhe2048" 00:19:34.221 } 00:19:34.221 } 00:19:34.221 ]' 00:19:34.222 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:34.222 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:34.222 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:34.222 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:34.222 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:34.222 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.222 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.222 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.481 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:34.481 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:35.049 08:16:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.049 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:35.049 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.049 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.049 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.049 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:35.049 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:35.049 08:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.308 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.566 00:19:35.566 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:35.566 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:35.566 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:35.825 
08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:35.825 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:35.825 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:35.825 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.825 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:35.825 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:35.825 { 00:19:35.825 "cntlid": 109, 00:19:35.825 "qid": 0, 00:19:35.825 "state": "enabled", 00:19:35.825 "thread": "nvmf_tgt_poll_group_000", 00:19:35.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:35.825 "listen_address": { 00:19:35.825 "trtype": "TCP", 00:19:35.825 "adrfam": "IPv4", 00:19:35.825 "traddr": "10.0.0.2", 00:19:35.825 "trsvcid": "4420" 00:19:35.825 }, 00:19:35.825 "peer_address": { 00:19:35.825 "trtype": "TCP", 00:19:35.825 "adrfam": "IPv4", 00:19:35.825 "traddr": "10.0.0.1", 00:19:35.825 "trsvcid": "50002" 00:19:35.825 }, 00:19:35.825 "auth": { 00:19:35.825 "state": "completed", 00:19:35.825 "digest": "sha512", 00:19:35.825 "dhgroup": "ffdhe2048" 00:19:35.825 } 00:19:35.825 } 00:19:35.825 ]' 00:19:35.825 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:35.825 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:35.825 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:35.825 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:35.825 08:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:35.825 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:35.825 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:35.825 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.084 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:36.084 08:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:36.651 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:36.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:36.651 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:36.651 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.651 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.651 
08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.651 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:36.651 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.651 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:19:36.910 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:19:36.910 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:36.910 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:36.910 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:19:36.910 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:36.910 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:36.910 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:36.910 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:36.910 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.910 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:36.910 08:16:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:36.910 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:36.910 08:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:37.169 00:19:37.169 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:37.169 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:37.169 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:37.431 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:37.431 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:37.431 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:37.431 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.431 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:37.431 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:37.431 { 00:19:37.431 "cntlid": 111, 
00:19:37.431 "qid": 0, 00:19:37.431 "state": "enabled", 00:19:37.431 "thread": "nvmf_tgt_poll_group_000", 00:19:37.431 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:37.431 "listen_address": { 00:19:37.431 "trtype": "TCP", 00:19:37.431 "adrfam": "IPv4", 00:19:37.431 "traddr": "10.0.0.2", 00:19:37.431 "trsvcid": "4420" 00:19:37.431 }, 00:19:37.431 "peer_address": { 00:19:37.431 "trtype": "TCP", 00:19:37.431 "adrfam": "IPv4", 00:19:37.431 "traddr": "10.0.0.1", 00:19:37.431 "trsvcid": "35562" 00:19:37.431 }, 00:19:37.431 "auth": { 00:19:37.431 "state": "completed", 00:19:37.431 "digest": "sha512", 00:19:37.431 "dhgroup": "ffdhe2048" 00:19:37.431 } 00:19:37.431 } 00:19:37.431 ]' 00:19:37.431 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:37.431 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:37.431 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:37.432 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:37.432 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:37.432 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:37.432 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:37.432 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:37.691 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:37.691 08:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:38.259 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:38.259 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:38.259 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:38.259 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.259 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.259 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.259 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:38.259 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:38.259 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:38.259 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:38.518 08:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:19:38.518 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:38.518 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:38.518 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:38.518 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:38.518 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:38.518 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.518 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.518 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.518 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.518 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.518 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.518 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:38.777 00:19:38.777 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:38.777 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:38.777 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:38.777 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.777 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.777 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:38.777 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.777 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:38.777 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:38.777 { 00:19:38.777 "cntlid": 113, 00:19:38.777 "qid": 0, 00:19:38.777 "state": "enabled", 00:19:38.777 "thread": "nvmf_tgt_poll_group_000", 00:19:38.777 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:38.777 "listen_address": { 00:19:38.777 "trtype": "TCP", 00:19:38.777 "adrfam": "IPv4", 00:19:38.777 "traddr": "10.0.0.2", 00:19:38.777 "trsvcid": "4420" 00:19:38.777 }, 00:19:38.777 "peer_address": { 00:19:38.777 "trtype": "TCP", 00:19:38.777 "adrfam": "IPv4", 00:19:38.777 "traddr": "10.0.0.1", 00:19:38.777 "trsvcid": "35596" 00:19:38.777 }, 00:19:38.777 "auth": { 00:19:38.777 "state": 
"completed", 00:19:38.777 "digest": "sha512", 00:19:38.777 "dhgroup": "ffdhe3072" 00:19:38.777 } 00:19:38.777 } 00:19:38.777 ]' 00:19:39.036 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:39.036 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:39.036 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:39.036 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:39.036 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:39.036 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:39.036 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:39.036 08:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:39.295 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:39.295 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret 
DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:39.863 08:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:40.122 00:19:40.381 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:40.381 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:40.381 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.381 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.381 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.381 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:40.381 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.381 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:40.381 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:40.381 { 00:19:40.381 "cntlid": 115, 00:19:40.381 "qid": 0, 00:19:40.381 "state": "enabled", 00:19:40.381 "thread": "nvmf_tgt_poll_group_000", 00:19:40.381 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:40.381 "listen_address": { 00:19:40.381 "trtype": "TCP", 00:19:40.381 "adrfam": "IPv4", 00:19:40.381 "traddr": "10.0.0.2", 00:19:40.381 "trsvcid": "4420" 00:19:40.381 }, 00:19:40.381 "peer_address": { 00:19:40.381 "trtype": "TCP", 00:19:40.381 "adrfam": "IPv4", 00:19:40.381 "traddr": "10.0.0.1", 00:19:40.381 "trsvcid": "35614" 00:19:40.381 }, 00:19:40.381 "auth": { 00:19:40.381 "state": "completed", 00:19:40.381 "digest": "sha512", 00:19:40.381 "dhgroup": "ffdhe3072" 00:19:40.381 } 00:19:40.381 } 00:19:40.381 ]' 00:19:40.381 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:40.381 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:40.640 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:40.640 08:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:40.640 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:40.640 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.640 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.640 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.900 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:40.900 08:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:41.468 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.468 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:41.468 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:19:41.468 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.468 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.468 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:41.468 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:41.468 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.727 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:41.986 00:19:41.986 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:41.986 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:41.986 08:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:41.987 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:41.987 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:41.987 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:41.987 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.246 08:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:42.246 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:42.246 { 00:19:42.246 "cntlid": 117, 00:19:42.246 "qid": 0, 00:19:42.246 "state": "enabled", 00:19:42.246 "thread": "nvmf_tgt_poll_group_000", 00:19:42.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:42.246 "listen_address": { 00:19:42.246 "trtype": "TCP", 00:19:42.246 "adrfam": "IPv4", 00:19:42.246 "traddr": "10.0.0.2", 00:19:42.246 "trsvcid": "4420" 00:19:42.246 }, 00:19:42.246 "peer_address": { 00:19:42.246 "trtype": "TCP", 00:19:42.246 "adrfam": "IPv4", 00:19:42.246 "traddr": "10.0.0.1", 00:19:42.246 "trsvcid": "35642" 00:19:42.246 }, 00:19:42.246 "auth": { 00:19:42.246 "state": "completed", 00:19:42.246 "digest": "sha512", 00:19:42.246 "dhgroup": "ffdhe3072" 00:19:42.246 } 00:19:42.246 } 00:19:42.246 ]' 00:19:42.246 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:42.246 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:42.246 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:42.246 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:42.246 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:42.246 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.246 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.246 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:42.505 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:42.505 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:43.072 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.072 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.072 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:43.072 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.072 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.072 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.072 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:43.072 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:43.072 08:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.331 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:43.590 00:19:43.590 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:43.590 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:43.590 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:43.590 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:43.590 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:43.590 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:43.590 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.590 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:43.590 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:43.590 { 00:19:43.590 "cntlid": 119, 00:19:43.590 "qid": 0, 00:19:43.590 "state": "enabled", 00:19:43.590 "thread": "nvmf_tgt_poll_group_000", 00:19:43.590 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:43.590 "listen_address": { 00:19:43.590 "trtype": "TCP", 00:19:43.590 "adrfam": "IPv4", 00:19:43.590 "traddr": "10.0.0.2", 00:19:43.590 "trsvcid": "4420" 00:19:43.590 }, 00:19:43.590 "peer_address": { 00:19:43.590 "trtype": "TCP", 00:19:43.590 "adrfam": "IPv4", 00:19:43.590 "traddr": "10.0.0.1", 
00:19:43.590 "trsvcid": "35676" 00:19:43.590 }, 00:19:43.590 "auth": { 00:19:43.590 "state": "completed", 00:19:43.590 "digest": "sha512", 00:19:43.590 "dhgroup": "ffdhe3072" 00:19:43.590 } 00:19:43.590 } 00:19:43.590 ]' 00:19:43.590 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:43.849 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:43.849 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:43.849 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:43.849 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:43.849 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:43.849 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:43.849 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:44.107 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:44.107 08:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:44.675 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:44.675 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:44.675 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:44.675 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.675 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.675 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.675 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:44.675 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:44.675 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:44.675 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:44.935 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:19:44.935 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:44.935 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:44.935 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:44.935 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:44.935 08:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.935 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.935 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:44.935 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.935 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:44.935 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.935 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:44.935 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:45.194 00:19:45.194 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:45.194 08:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:45.194 08:16:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:45.194 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:45.194 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:45.194 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:45.194 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:45.194 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:45.194 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:45.194 { 00:19:45.194 "cntlid": 121, 00:19:45.194 "qid": 0, 00:19:45.194 "state": "enabled", 00:19:45.194 "thread": "nvmf_tgt_poll_group_000", 00:19:45.194 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:45.194 "listen_address": { 00:19:45.194 "trtype": "TCP", 00:19:45.194 "adrfam": "IPv4", 00:19:45.194 "traddr": "10.0.0.2", 00:19:45.194 "trsvcid": "4420" 00:19:45.194 }, 00:19:45.194 "peer_address": { 00:19:45.194 "trtype": "TCP", 00:19:45.194 "adrfam": "IPv4", 00:19:45.194 "traddr": "10.0.0.1", 00:19:45.194 "trsvcid": "35700" 00:19:45.194 }, 00:19:45.194 "auth": { 00:19:45.194 "state": "completed", 00:19:45.194 "digest": "sha512", 00:19:45.194 "dhgroup": "ffdhe4096" 00:19:45.194 } 00:19:45.194 } 00:19:45.194 ]' 00:19:45.194 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:45.452 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:45.452 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:45.452 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:45.452 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:45.452 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.452 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.452 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.711 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:45.711 08:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:46.278 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.278 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:46.278 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:46.278 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.278 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.278 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.278 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:46.278 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:46.278 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.536 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:46.794 00:19:46.794 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:46.794 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:46.794 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.053 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.053 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.053 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.053 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.053 
08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.053 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:47.053 { 00:19:47.053 "cntlid": 123, 00:19:47.053 "qid": 0, 00:19:47.053 "state": "enabled", 00:19:47.053 "thread": "nvmf_tgt_poll_group_000", 00:19:47.053 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:47.053 "listen_address": { 00:19:47.053 "trtype": "TCP", 00:19:47.053 "adrfam": "IPv4", 00:19:47.053 "traddr": "10.0.0.2", 00:19:47.053 "trsvcid": "4420" 00:19:47.053 }, 00:19:47.053 "peer_address": { 00:19:47.053 "trtype": "TCP", 00:19:47.053 "adrfam": "IPv4", 00:19:47.053 "traddr": "10.0.0.1", 00:19:47.053 "trsvcid": "57652" 00:19:47.053 }, 00:19:47.053 "auth": { 00:19:47.053 "state": "completed", 00:19:47.053 "digest": "sha512", 00:19:47.053 "dhgroup": "ffdhe4096" 00:19:47.053 } 00:19:47.053 } 00:19:47.053 ]' 00:19:47.053 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:47.053 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:47.053 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:47.053 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:47.053 08:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:47.053 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.053 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.053 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.312 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:47.312 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:47.880 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:47.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:47.880 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:47.880 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.880 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:47.880 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.880 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:47.880 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:47.880 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:48.138 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:19:48.138 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:48.138 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:48.138 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:48.138 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:48.138 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:48.138 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.138 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.138 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.138 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.138 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.138 08:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.139 08:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:48.397 00:19:48.397 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:48.397 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:48.397 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:48.656 { 00:19:48.656 "cntlid": 125, 00:19:48.656 "qid": 0, 00:19:48.656 "state": "enabled", 00:19:48.656 "thread": "nvmf_tgt_poll_group_000", 00:19:48.656 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:48.656 "listen_address": { 00:19:48.656 "trtype": "TCP", 00:19:48.656 "adrfam": "IPv4", 00:19:48.656 "traddr": "10.0.0.2", 00:19:48.656 "trsvcid": "4420" 00:19:48.656 }, 00:19:48.656 "peer_address": { 
00:19:48.656 "trtype": "TCP", 00:19:48.656 "adrfam": "IPv4", 00:19:48.656 "traddr": "10.0.0.1", 00:19:48.656 "trsvcid": "57676" 00:19:48.656 }, 00:19:48.656 "auth": { 00:19:48.656 "state": "completed", 00:19:48.656 "digest": "sha512", 00:19:48.656 "dhgroup": "ffdhe4096" 00:19:48.656 } 00:19:48.656 } 00:19:48.656 ]' 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:48.656 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:48.915 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:48.915 08:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:49.480 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:49.480 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:49.480 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:49.480 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.480 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.480 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.481 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:49.481 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:49.481 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:19:49.739 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:19:49.739 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:49.739 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:49.739 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:19:49.739 08:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:49.739 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.739 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:49.739 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.739 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.739 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.739 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:49.739 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.739 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:49.998 00:19:49.998 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:49.998 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:49.998 08:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:50.257 { 00:19:50.257 "cntlid": 127, 00:19:50.257 "qid": 0, 00:19:50.257 "state": "enabled", 00:19:50.257 "thread": "nvmf_tgt_poll_group_000", 00:19:50.257 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:50.257 "listen_address": { 00:19:50.257 "trtype": "TCP", 00:19:50.257 "adrfam": "IPv4", 00:19:50.257 "traddr": "10.0.0.2", 00:19:50.257 "trsvcid": "4420" 00:19:50.257 }, 00:19:50.257 "peer_address": { 00:19:50.257 "trtype": "TCP", 00:19:50.257 "adrfam": "IPv4", 00:19:50.257 "traddr": "10.0.0.1", 00:19:50.257 "trsvcid": "57704" 00:19:50.257 }, 00:19:50.257 "auth": { 00:19:50.257 "state": "completed", 00:19:50.257 "digest": "sha512", 00:19:50.257 "dhgroup": "ffdhe4096" 00:19:50.257 } 00:19:50.257 } 00:19:50.257 ]' 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:50.257 08:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:50.257 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:50.515 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:50.515 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:51.082 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:51.082 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:51.082 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:51.082 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.082 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:19:51.082 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.082 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:51.082 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:51.082 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:51.082 08:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:51.341 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:19:51.341 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:51.341 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:51.341 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:51.341 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:51.341 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.341 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.341 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.341 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:19:51.341 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.342 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.342 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.342 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:51.600 00:19:51.600 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:51.600 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.600 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.859 08:17:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:51.859 { 00:19:51.859 "cntlid": 129, 00:19:51.859 "qid": 0, 00:19:51.859 "state": "enabled", 00:19:51.859 "thread": "nvmf_tgt_poll_group_000", 00:19:51.859 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:51.859 "listen_address": { 00:19:51.859 "trtype": "TCP", 00:19:51.859 "adrfam": "IPv4", 00:19:51.859 "traddr": "10.0.0.2", 00:19:51.859 "trsvcid": "4420" 00:19:51.859 }, 00:19:51.859 "peer_address": { 00:19:51.859 "trtype": "TCP", 00:19:51.859 "adrfam": "IPv4", 00:19:51.859 "traddr": "10.0.0.1", 00:19:51.859 "trsvcid": "57744" 00:19:51.859 }, 00:19:51.859 "auth": { 00:19:51.859 "state": "completed", 00:19:51.859 "digest": "sha512", 00:19:51.859 "dhgroup": "ffdhe6144" 00:19:51.859 } 00:19:51.859 } 00:19:51.859 ]' 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.859 08:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.116 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:52.116 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:52.682 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:52.682 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:52.682 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:52.682 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.682 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.682 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.682 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:52.682 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.682 08:17:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:52.941 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:19:52.941 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:52.941 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:52.941 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:52.941 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:52.941 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:52.941 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.941 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:52.941 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:52.941 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:52.941 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.942 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:52.942 08:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:53.200 00:19:53.200 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:53.200 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:53.200 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:53.459 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:53.459 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:53.459 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:53.459 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.459 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:53.459 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:53.459 { 00:19:53.459 "cntlid": 131, 00:19:53.459 "qid": 0, 00:19:53.459 "state": "enabled", 00:19:53.459 "thread": "nvmf_tgt_poll_group_000", 00:19:53.459 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:53.459 "listen_address": { 00:19:53.459 "trtype": "TCP", 00:19:53.459 "adrfam": "IPv4", 00:19:53.459 "traddr": "10.0.0.2", 00:19:53.459 
"trsvcid": "4420" 00:19:53.459 }, 00:19:53.459 "peer_address": { 00:19:53.459 "trtype": "TCP", 00:19:53.459 "adrfam": "IPv4", 00:19:53.459 "traddr": "10.0.0.1", 00:19:53.459 "trsvcid": "57776" 00:19:53.459 }, 00:19:53.459 "auth": { 00:19:53.459 "state": "completed", 00:19:53.459 "digest": "sha512", 00:19:53.459 "dhgroup": "ffdhe6144" 00:19:53.459 } 00:19:53.459 } 00:19:53.459 ]' 00:19:53.459 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:53.459 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:53.459 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:53.459 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:53.459 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:53.459 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:53.460 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:53.460 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:53.719 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:53.719 08:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:19:54.286 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:54.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:54.286 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:54.286 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.286 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.286 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.286 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:54.286 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.286 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.544 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:54.802 00:19:54.802 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:54.802 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.802 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:55.060 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:55.060 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:55.060 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:55.060 08:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.060 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:55.060 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:55.060 { 00:19:55.060 "cntlid": 133, 00:19:55.060 "qid": 0, 00:19:55.060 "state": "enabled", 00:19:55.060 "thread": "nvmf_tgt_poll_group_000", 00:19:55.060 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:55.060 "listen_address": { 00:19:55.060 "trtype": "TCP", 00:19:55.060 "adrfam": "IPv4", 00:19:55.060 "traddr": "10.0.0.2", 00:19:55.060 "trsvcid": "4420" 00:19:55.060 }, 00:19:55.060 "peer_address": { 00:19:55.060 "trtype": "TCP", 00:19:55.060 "adrfam": "IPv4", 00:19:55.060 "traddr": "10.0.0.1", 00:19:55.060 "trsvcid": "57804" 00:19:55.060 }, 00:19:55.060 "auth": { 00:19:55.060 "state": "completed", 00:19:55.060 "digest": "sha512", 00:19:55.060 "dhgroup": "ffdhe6144" 00:19:55.060 } 00:19:55.060 } 00:19:55.060 ]' 00:19:55.060 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:55.060 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:55.060 08:17:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:55.318 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:55.318 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:55.318 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:55.318 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:55.318 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:55.318 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:55.318 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:19:55.883 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:56.141 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:56.141 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:56.141 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.141 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.141 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.141 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:56.141 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:56.141 08:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.141 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:19:56.708 00:19:56.708 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:56.708 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:56.708 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.708 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.708 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.708 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:56.708 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:56.708 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:56.708 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:56.708 { 00:19:56.708 "cntlid": 135, 00:19:56.708 "qid": 0, 00:19:56.708 "state": "enabled", 00:19:56.708 "thread": "nvmf_tgt_poll_group_000", 00:19:56.708 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:56.708 "listen_address": { 00:19:56.708 "trtype": "TCP", 00:19:56.708 "adrfam": "IPv4", 00:19:56.708 "traddr": "10.0.0.2", 00:19:56.708 "trsvcid": "4420" 00:19:56.708 }, 00:19:56.708 "peer_address": { 00:19:56.708 "trtype": "TCP", 00:19:56.708 "adrfam": "IPv4", 00:19:56.708 "traddr": "10.0.0.1", 00:19:56.708 "trsvcid": "48996" 00:19:56.708 }, 00:19:56.708 "auth": { 00:19:56.708 "state": "completed", 00:19:56.708 "digest": "sha512", 00:19:56.708 "dhgroup": "ffdhe6144" 00:19:56.708 } 00:19:56.708 } 00:19:56.708 ]' 00:19:56.708 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:56.967 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:56.967 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:56.967 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:56.967 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:56.967 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.967 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.967 08:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:57.226 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:57.226 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:19:57.794 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.794 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.794 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:57.794 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:57.794 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.794 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:57.794 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.794 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:57.794 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:57.794 08:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.053 08:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.311 00:19:58.311 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:19:58.311 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:19:58.311 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.569 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.569 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.569 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:58.569 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.569 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:58.569 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:19:58.569 { 00:19:58.569 "cntlid": 137, 00:19:58.569 "qid": 0, 00:19:58.569 "state": "enabled", 00:19:58.569 "thread": "nvmf_tgt_poll_group_000", 00:19:58.569 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:19:58.569 "listen_address": { 00:19:58.569 "trtype": "TCP", 00:19:58.569 "adrfam": "IPv4", 00:19:58.569 "traddr": "10.0.0.2", 00:19:58.569 
"trsvcid": "4420" 00:19:58.569 }, 00:19:58.569 "peer_address": { 00:19:58.569 "trtype": "TCP", 00:19:58.569 "adrfam": "IPv4", 00:19:58.569 "traddr": "10.0.0.1", 00:19:58.569 "trsvcid": "49016" 00:19:58.569 }, 00:19:58.569 "auth": { 00:19:58.569 "state": "completed", 00:19:58.569 "digest": "sha512", 00:19:58.569 "dhgroup": "ffdhe8192" 00:19:58.569 } 00:19:58.569 } 00:19:58.569 ]' 00:19:58.569 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:19:58.569 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:19:58.569 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:19:58.827 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:19:58.827 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:19:58.827 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.827 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.827 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:58.827 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:58.827 08:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:19:59.394 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:59.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:59.394 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:59.394 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.394 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.394 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.394 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:19:59.394 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.395 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:19:59.653 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:19:59.653 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:19:59.653 08:17:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:19:59.653 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:19:59.653 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:19:59.653 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:59.653 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.653 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:59.653 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:59.653 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:59.653 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.653 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:59.653 08:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.221 00:20:00.221 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:00.221 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:00.221 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:00.480 { 00:20:00.480 "cntlid": 139, 00:20:00.480 "qid": 0, 00:20:00.480 "state": "enabled", 00:20:00.480 "thread": "nvmf_tgt_poll_group_000", 00:20:00.480 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:00.480 "listen_address": { 00:20:00.480 "trtype": "TCP", 00:20:00.480 "adrfam": "IPv4", 00:20:00.480 "traddr": "10.0.0.2", 00:20:00.480 "trsvcid": "4420" 00:20:00.480 }, 00:20:00.480 "peer_address": { 00:20:00.480 "trtype": "TCP", 00:20:00.480 "adrfam": "IPv4", 00:20:00.480 "traddr": "10.0.0.1", 00:20:00.480 "trsvcid": "49034" 00:20:00.480 }, 00:20:00.480 "auth": { 00:20:00.480 "state": "completed", 00:20:00.480 "digest": "sha512", 00:20:00.480 "dhgroup": "ffdhe8192" 00:20:00.480 } 00:20:00.480 } 00:20:00.480 ]' 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:00.480 08:17:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:00.480 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:00.739 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:20:00.739 08:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: --dhchap-ctrl-secret DHHC-1:02:ZjJmNWVlZDdlYWQyNGM1Nzc1MDU4YmJlYThiZmQ5MDhiYzE4YTk4YTZkMjMyN2I5/E1eMA==: 00:20:01.305 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:01.305 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:01.305 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:01.305 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.305 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.305 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.305 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:01.305 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.305 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:01.563 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.131 00:20:02.131 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:02.131 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:02.131 08:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:02.131 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:02.131 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:02.131 08:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:02.131 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.131 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:02.131 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:02.131 { 00:20:02.131 "cntlid": 141, 00:20:02.131 "qid": 0, 00:20:02.131 "state": "enabled", 00:20:02.131 "thread": "nvmf_tgt_poll_group_000", 00:20:02.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:02.131 "listen_address": { 00:20:02.131 "trtype": "TCP", 00:20:02.131 "adrfam": "IPv4", 00:20:02.131 "traddr": "10.0.0.2", 00:20:02.131 "trsvcid": "4420" 00:20:02.131 }, 00:20:02.131 "peer_address": { 00:20:02.131 "trtype": "TCP", 00:20:02.131 "adrfam": "IPv4", 00:20:02.131 "traddr": "10.0.0.1", 00:20:02.131 "trsvcid": "49062" 00:20:02.131 }, 00:20:02.131 "auth": { 00:20:02.131 "state": "completed", 00:20:02.131 "digest": "sha512", 00:20:02.131 "dhgroup": "ffdhe8192" 00:20:02.131 } 00:20:02.131 } 00:20:02.131 ]' 00:20:02.131 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:02.131 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:02.131 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:02.390 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:02.390 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:02.390 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:02.390 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:02.390 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:02.649 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:20:02.649 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:01:OWE3ZGRkZWM2OWEyZDdiZDZhNGYwN2NhNjVhZDNlZjh8YHrb: 00:20:03.215 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:03.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:03.215 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:03.215 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.215 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.215 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.215 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:03.215 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:03.215 08:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:20:03.215 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:20:03.215 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:03.215 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:03.215 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:03.215 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:03.215 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:03.215 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:03.216 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:03.216 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:03.216 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:03.216 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:03.216 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.216 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:03.782 00:20:03.782 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:03.782 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.782 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:04.041 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:04.041 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:04.041 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.041 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.041 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.041 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:04.041 { 00:20:04.041 "cntlid": 143, 00:20:04.041 "qid": 0, 00:20:04.041 "state": "enabled", 00:20:04.041 "thread": "nvmf_tgt_poll_group_000", 00:20:04.041 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:04.041 "listen_address": { 00:20:04.041 "trtype": "TCP", 00:20:04.041 "adrfam": 
"IPv4", 00:20:04.041 "traddr": "10.0.0.2", 00:20:04.041 "trsvcid": "4420" 00:20:04.041 }, 00:20:04.041 "peer_address": { 00:20:04.041 "trtype": "TCP", 00:20:04.041 "adrfam": "IPv4", 00:20:04.041 "traddr": "10.0.0.1", 00:20:04.041 "trsvcid": "49082" 00:20:04.041 }, 00:20:04.041 "auth": { 00:20:04.041 "state": "completed", 00:20:04.041 "digest": "sha512", 00:20:04.041 "dhgroup": "ffdhe8192" 00:20:04.041 } 00:20:04.041 } 00:20:04.041 ]' 00:20:04.041 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:04.041 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:04.041 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:04.041 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:04.041 08:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:04.041 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:04.041 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.041 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.300 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:20:04.300 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:20:04.867 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:04.867 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:04.867 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:04.867 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:04.867 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:04.867 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:04.867 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:04.867 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:20:04.867 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:20:04.867 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:04.867 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:04.867 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:05.126 08:17:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:20:05.126 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:05.126 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:05.126 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:05.126 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:05.126 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.126 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.126 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.126 08:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.126 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.126 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.126 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.126 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:05.692 00:20:05.692 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:05.692 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:05.692 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:05.692 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:05.692 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:05.692 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:05.692 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.692 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:05.692 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:05.692 { 00:20:05.692 "cntlid": 145, 00:20:05.692 "qid": 0, 00:20:05.692 "state": "enabled", 00:20:05.692 "thread": "nvmf_tgt_poll_group_000", 00:20:05.692 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:05.692 "listen_address": { 00:20:05.692 "trtype": "TCP", 00:20:05.692 "adrfam": "IPv4", 00:20:05.692 "traddr": "10.0.0.2", 00:20:05.692 "trsvcid": "4420" 00:20:05.692 }, 00:20:05.692 "peer_address": { 00:20:05.692 "trtype": "TCP", 00:20:05.692 "adrfam": "IPv4", 00:20:05.692 "traddr": "10.0.0.1", 00:20:05.692 "trsvcid": "49114" 00:20:05.692 }, 00:20:05.692 "auth": { 00:20:05.692 "state": 
"completed", 00:20:05.692 "digest": "sha512", 00:20:05.692 "dhgroup": "ffdhe8192" 00:20:05.692 } 00:20:05.692 } 00:20:05.692 ]' 00:20:05.692 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:05.951 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:05.951 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:05.951 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:05.951 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:05.951 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:05.951 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:05.951 08:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.210 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:20:06.210 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjRmNmMwZmQ3ZmRlNDJkYTA5YjhjOTZiMzQ1YzA1OTZkYzljYmNkYjNmZWZjNjNh0L9xAg==: --dhchap-ctrl-secret 
DHHC-1:03:YjU4MmJkNzIxN2U4NDc5YTI2NWRjN2FiYzgyNDczNzRmNDQ1NGE3YzA0YzY5ZGQyODc4Y2NmNjZiMzRmNjIyZFh7n/w=: 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:06.777 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:20:06.777 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:06.778 08:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:20:07.036 request: 00:20:07.036 { 00:20:07.036 "name": "nvme0", 00:20:07.036 "trtype": "tcp", 00:20:07.036 "traddr": "10.0.0.2", 00:20:07.036 "adrfam": "ipv4", 00:20:07.036 "trsvcid": "4420", 00:20:07.036 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:07.036 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:07.036 "prchk_reftag": false, 00:20:07.036 "prchk_guard": false, 00:20:07.036 "hdgst": false, 00:20:07.036 "ddgst": false, 00:20:07.036 "dhchap_key": "key2", 00:20:07.036 "allow_unrecognized_csi": false, 00:20:07.036 "method": "bdev_nvme_attach_controller", 00:20:07.036 "req_id": 1 00:20:07.036 } 00:20:07.036 Got JSON-RPC error response 00:20:07.036 response: 00:20:07.036 { 00:20:07.036 "code": -5, 00:20:07.036 "message": 
"Input/output error" 00:20:07.036 } 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:07.295 08:17:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:07.295 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:20:07.554 request: 00:20:07.554 { 00:20:07.554 "name": "nvme0", 00:20:07.554 "trtype": "tcp", 00:20:07.554 "traddr": "10.0.0.2", 00:20:07.554 "adrfam": "ipv4", 00:20:07.554 "trsvcid": "4420", 00:20:07.554 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:07.554 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:07.554 "prchk_reftag": false, 00:20:07.554 "prchk_guard": false, 00:20:07.554 "hdgst": 
false, 00:20:07.554 "ddgst": false, 00:20:07.554 "dhchap_key": "key1", 00:20:07.554 "dhchap_ctrlr_key": "ckey2", 00:20:07.554 "allow_unrecognized_csi": false, 00:20:07.554 "method": "bdev_nvme_attach_controller", 00:20:07.554 "req_id": 1 00:20:07.554 } 00:20:07.554 Got JSON-RPC error response 00:20:07.554 response: 00:20:07.554 { 00:20:07.554 "code": -5, 00:20:07.554 "message": "Input/output error" 00:20:07.554 } 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:07.554 08:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:08.120 request: 00:20:08.120 { 00:20:08.120 "name": "nvme0", 00:20:08.120 "trtype": 
"tcp", 00:20:08.120 "traddr": "10.0.0.2", 00:20:08.120 "adrfam": "ipv4", 00:20:08.120 "trsvcid": "4420", 00:20:08.120 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:08.120 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:08.120 "prchk_reftag": false, 00:20:08.120 "prchk_guard": false, 00:20:08.120 "hdgst": false, 00:20:08.120 "ddgst": false, 00:20:08.120 "dhchap_key": "key1", 00:20:08.120 "dhchap_ctrlr_key": "ckey1", 00:20:08.120 "allow_unrecognized_csi": false, 00:20:08.120 "method": "bdev_nvme_attach_controller", 00:20:08.120 "req_id": 1 00:20:08.120 } 00:20:08.120 Got JSON-RPC error response 00:20:08.120 response: 00:20:08.120 { 00:20:08.120 "code": -5, 00:20:08.120 "message": "Input/output error" 00:20:08.120 } 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 1679233 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 1679233 ']' 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1679233 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1679233 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1679233' 00:20:08.120 killing process with pid 1679233 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1679233 00:20:08.120 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1679233 00:20:08.379 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:20:08.379 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:08.379 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.379 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.379 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=1700719 00:20:08.379 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 1700719 00:20:08.379 08:17:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:20:08.379 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1700719 ']' 00:20:08.379 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.379 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.379 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.379 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.379 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 1700719 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 1700719 ']' 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.638 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.638 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.897 null0 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.Xil 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.N7S ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.N7S 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.rYa 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.cCy ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.cCy 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.oLO 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.NJd ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.NJd 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.kYm 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:08.897 08:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:09.834 nvme0n1 00:20:09.834 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.834 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.834 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.834 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.834 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.834 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.834 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.834 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.834 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.834 { 00:20:09.834 "cntlid": 1, 00:20:09.834 "qid": 0, 00:20:09.834 "state": "enabled", 00:20:09.834 "thread": "nvmf_tgt_poll_group_000", 00:20:09.834 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:09.834 "listen_address": { 00:20:09.834 "trtype": "TCP", 00:20:09.834 "adrfam": "IPv4", 00:20:09.834 "traddr": "10.0.0.2", 00:20:09.834 "trsvcid": "4420" 00:20:09.834 }, 00:20:09.834 "peer_address": { 00:20:09.834 "trtype": "TCP", 00:20:09.834 "adrfam": "IPv4", 00:20:09.834 "traddr": 
"10.0.0.1", 00:20:09.834 "trsvcid": "35900" 00:20:09.834 }, 00:20:09.834 "auth": { 00:20:09.834 "state": "completed", 00:20:09.834 "digest": "sha512", 00:20:09.834 "dhgroup": "ffdhe8192" 00:20:09.834 } 00:20:09.834 } 00:20:09.834 ]' 00:20:09.834 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:10.092 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:20:10.092 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:10.092 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:10.092 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:10.092 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:10.092 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:10.092 08:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:10.350 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:20:10.350 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:20:10.917 08:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:10.917 08:17:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:10.917 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:11.178 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.178 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:11.178 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.178 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.178 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.178 08:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.178 request: 00:20:11.178 { 00:20:11.178 "name": "nvme0", 00:20:11.178 "trtype": "tcp", 00:20:11.178 "traddr": "10.0.0.2", 00:20:11.178 "adrfam": "ipv4", 00:20:11.178 "trsvcid": "4420", 00:20:11.178 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:11.178 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:11.178 "prchk_reftag": false, 00:20:11.178 "prchk_guard": false, 00:20:11.178 "hdgst": false, 00:20:11.178 "ddgst": false, 00:20:11.178 "dhchap_key": "key3", 00:20:11.178 
"allow_unrecognized_csi": false, 00:20:11.178 "method": "bdev_nvme_attach_controller", 00:20:11.178 "req_id": 1 00:20:11.178 } 00:20:11.178 Got JSON-RPC error response 00:20:11.178 response: 00:20:11.178 { 00:20:11.178 "code": -5, 00:20:11.178 "message": "Input/output error" 00:20:11.178 } 00:20:11.178 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:11.178 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:11.178 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:11.178 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:11.178 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:20:11.178 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:20:11.178 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:11.178 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:20:11.495 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:20:11.495 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:11.495 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:20:11.495 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:11.495 08:17:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.495 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:11.495 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.495 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:11.495 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.495 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:11.787 request: 00:20:11.787 { 00:20:11.787 "name": "nvme0", 00:20:11.787 "trtype": "tcp", 00:20:11.787 "traddr": "10.0.0.2", 00:20:11.787 "adrfam": "ipv4", 00:20:11.787 "trsvcid": "4420", 00:20:11.787 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:11.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:11.787 "prchk_reftag": false, 00:20:11.787 "prchk_guard": false, 00:20:11.787 "hdgst": false, 00:20:11.787 "ddgst": false, 00:20:11.787 "dhchap_key": "key3", 00:20:11.787 "allow_unrecognized_csi": false, 00:20:11.787 "method": "bdev_nvme_attach_controller", 00:20:11.787 "req_id": 1 00:20:11.787 } 00:20:11.787 Got JSON-RPC error response 00:20:11.787 response: 00:20:11.787 { 00:20:11.787 "code": -5, 00:20:11.787 "message": "Input/output error" 00:20:11.787 } 00:20:11.787 
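The request/response pair above is easier to read reassembled. Below is a sketch (not taken from the SPDK sources) of the JSON-RPC 2.0 envelope that `scripts/rpc.py` sends for this `bdev_nvme_attach_controller` call; the parameter names and values are copied from the log dump, while the `jsonrpc`/`params`/`id` framing is assumed standard JSON-RPC 2.0 framing.

```python
# Sketch of the bdev_nvme_attach_controller request dumped above.
# Params are copied verbatim from the log; the envelope is assumed
# standard JSON-RPC 2.0 as used by scripts/rpc.py.
import json

params = {
    "name": "nvme0",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2024-03.io.spdk:cnode0",
    "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
    "prchk_reftag": False,
    "prchk_guard": False,
    "hdgst": False,
    "ddgst": False,
    "dhchap_key": "key3",
    "allow_unrecognized_csi": False,
}
request = {
    "jsonrpc": "2.0",
    "method": "bdev_nvme_attach_controller",
    "params": params,
    "id": 1,
}
wire = json.dumps(request)

# The target refuses DH-HMAC-CHAP with key3 for this host, so the RPC
# fails with the error shown in the log:
error = {"code": -5, "message": "Input/output error"}
```

The test harness (`NOT bdev_connect …`) deliberately expects this failure: key3 is not among the keys the target currently allows for this host NQN.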
08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:11.787 08:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:12.072 request: 00:20:12.072 { 00:20:12.072 "name": "nvme0", 00:20:12.072 "trtype": "tcp", 00:20:12.072 "traddr": "10.0.0.2", 00:20:12.072 "adrfam": "ipv4", 00:20:12.072 "trsvcid": "4420", 00:20:12.072 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:12.072 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:12.072 "prchk_reftag": false, 00:20:12.072 "prchk_guard": false, 00:20:12.072 "hdgst": false, 00:20:12.072 "ddgst": false, 00:20:12.072 "dhchap_key": "key0", 00:20:12.072 "dhchap_ctrlr_key": "key1", 00:20:12.072 "allow_unrecognized_csi": false, 00:20:12.072 "method": "bdev_nvme_attach_controller", 00:20:12.072 "req_id": 1 00:20:12.072 } 00:20:12.072 Got JSON-RPC error response 00:20:12.072 response: 00:20:12.072 { 00:20:12.072 "code": -5, 00:20:12.072 "message": "Input/output error" 00:20:12.072 } 00:20:12.331 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:12.331 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:12.331 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:12.331 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:12.331 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:20:12.331 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:12.331 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:20:12.589 nvme0n1 00:20:12.589 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:20:12.589 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:20:12.589 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:12.589 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:12.589 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:12.589 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.848 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:20:12.848 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.848 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:12.848 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.848 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:12.848 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:12.848 08:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:13.783 nvme0n1 00:20:13.783 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:20:13.783 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.783 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:20:13.783 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.783 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:13.783 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.783 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.783 
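The sequence above (attach with key1, then `nvmf_subsystem_set_keys … --dhchap-key key2 --dhchap-ctrlr-key key3`) is the target-side half of a re-key: the set of keys the target will accept for the host is changed while the host later reconnects with the new key. A toy model of that ordering, with hypothetical class and method names chosen only for illustration:

```python
# Toy model of the re-key ordering exercised by auth.sh: the target
# tracks which DH-HMAC-CHAP keys it accepts for a host; an attach with
# a key outside that set fails with -5 (Input/output error), as seen
# in the log. Names here are illustrative, not SPDK APIs.
class Subsystem:
    def __init__(self, allowed_keys):
        self.allowed_keys = set(allowed_keys)

    def set_keys(self, *keys):
        # models: rpc_cmd nvmf_subsystem_set_keys <subnqn> <hostnqn> ...
        self.allowed_keys = set(keys)

    def attach(self, dhchap_key):
        # models: hostrpc bdev_nvme_attach_controller ... --dhchap-key <k>
        if dhchap_key not in self.allowed_keys:
            return {"code": -5, "message": "Input/output error"}
        return {"name": "nvme0"}

subsys = Subsystem(["key1"])
assert subsys.attach("key3")["code"] == -5      # wrong key: rejected
assert subsys.attach("key1")["name"] == "nvme0" # current key: accepted
subsys.set_keys("key2", "key3")                 # target re-keys the host
assert subsys.attach("key1")["code"] == -5      # old key no longer works
assert subsys.attach("key2")["name"] == "nvme0" # new key works
```

This is only a model of the accept/reject behavior visible in the log; the real negotiation is the DH-HMAC-CHAP challenge/response performed by the TCP transport.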
08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.783 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:20:13.783 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:20:13.783 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.042 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.042 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:20:14.042 08:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: --dhchap-ctrl-secret DHHC-1:03:NTRjZjhhNTBkYmU5NGIyYjgzOTdmYmU0NzdmOWM2N2M3YWU0NDljYmRkMTA5MzQ1MGNkZmUwZWFkNDE1NjM5NlOnohk=: 00:20:14.609 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:20:14.609 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:20:14.609 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:20:14.609 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:20:14.609 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:20:14.609 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:20:14.609 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:20:14.609 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.609 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:14.867 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:20:14.867 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:14.867 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:20:14.867 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:20:14.867 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.867 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:20:14.867 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:14.867 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:20:14.867 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:14.867 08:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:20:15.125 request: 00:20:15.125 { 00:20:15.125 "name": "nvme0", 00:20:15.125 "trtype": "tcp", 00:20:15.125 "traddr": "10.0.0.2", 00:20:15.125 "adrfam": "ipv4", 00:20:15.125 "trsvcid": "4420", 00:20:15.125 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:20:15.125 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:20:15.125 "prchk_reftag": false, 00:20:15.125 "prchk_guard": false, 00:20:15.125 "hdgst": false, 00:20:15.125 "ddgst": false, 00:20:15.125 "dhchap_key": "key1", 00:20:15.125 "allow_unrecognized_csi": false, 00:20:15.125 "method": "bdev_nvme_attach_controller", 00:20:15.125 "req_id": 1 00:20:15.125 } 00:20:15.125 Got JSON-RPC error response 00:20:15.125 response: 00:20:15.125 { 00:20:15.125 "code": -5, 00:20:15.125 "message": "Input/output error" 00:20:15.125 } 00:20:15.125 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:15.125 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:15.125 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:15.125 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:15.125 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:15.125 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:15.125 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:16.059 nvme0n1 00:20:16.059 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:20:16.059 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:20:16.059 08:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.059 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.059 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.059 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:16.317 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:16.317 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.317 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:16.317 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.317 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:20:16.317 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:16.317 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:20:16.575 nvme0n1 00:20:16.575 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:20:16.575 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:20:16.575 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.834 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.834 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.834 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: '' 2s 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: ]] 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP: 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:17.091 08:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:18.990 
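The `DHHC-1:…:` strings passed around above are the interchange format for DH-HMAC-CHAP secrets. A sketch of parsing one of the keys that appears verbatim in this log; the interpretation of the second field as a hash identifier and of the trailing decoded bytes as a CRC-32 is my reading of the format, not something the log itself states:

```python
# Parse a DH-HMAC-CHAP secret from the log above. Format (as I read it):
#   DHHC-1:<hh>:<base64 payload>:
# where <hh> identifies the hash (01 is understood to mean SHA-256) and
# the base64 payload is the secret followed by what is reportedly a
# 4-byte CRC-32 of the secret (not verified here).
import base64

key = "DHHC-1:01:MWYxZDgyZGY2ZGQyMDkzOGYxN2I3ZDhkNzA0YTQ5ZDQuRmPP:"
prefix, hash_id, payload, _ = key.split(":")
raw = base64.b64decode(payload)

secret = raw[:-4]   # the secret material itself
trailer = raw[-4:]  # reportedly a CRC-32 over the secret

print(prefix, hash_id, len(secret), secret.decode())
```

For this particular key the payload decodes to a 32-byte ASCII secret plus the 4-byte trailer, which is consistent with the `DHHC-1:01:` label.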
08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: 2s 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:20:18.990 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:20:18.991 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:20:18.991 08:17:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: 00:20:18.991 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:20:18.991 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:20:18.991 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:20:18.991 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: ]] 00:20:18.991 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:NjY4MDRhNDMwZmY4Nzc0MzA1NTk1MmJmMDk3ZTlkYmY2ZjlmYTI5YWQzZjU1ZDky+6rGXg==: 00:20:18.991 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:20:18.991 08:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:20:21.524 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:20:21.524 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:20:21.524 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:20:21.524 08:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:20:21.524 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:20:21.524 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:20:21.524 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:20:21.524 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:21.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:21.524 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:21.524 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.524 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.524 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.524 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:21.524 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:21.524 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:21.783 nvme0n1 00:20:22.041 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:20:22.041 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.041 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.041 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.041 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:22.041 08:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:22.299 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:20:22.299 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:20:22.299 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:22.558 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:22.558 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:22.558 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.558 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.558 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.558 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:20:22.558 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:20:22.816 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:20:22.816 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:20:22.816 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:23.075 08:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:20:23.333 request: 00:20:23.333 { 00:20:23.333 "name": "nvme0", 00:20:23.333 "dhchap_key": "key1", 00:20:23.333 "dhchap_ctrlr_key": "key3", 00:20:23.333 "method": "bdev_nvme_set_keys", 00:20:23.333 "req_id": 1 00:20:23.333 } 00:20:23.333 Got JSON-RPC error response 00:20:23.333 response: 00:20:23.333 { 00:20:23.333 "code": -13, 00:20:23.333 "message": "Permission denied" 00:20:23.333 } 00:20:23.592 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:23.592 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:23.592 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:23.592 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:23.592 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:23.592 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:23.592 08:17:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.592 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:20:23.592 08:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:20:24.967 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:20:24.967 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:20:24.967 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.967 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:20:24.967 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:20:24.967 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.967 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.967 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.967 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:24.967 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:24.967 08:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:20:25.534 nvme0n1 00:20:25.534 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:20:25.534 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.534 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.534 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.534 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:25.534 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:20:25.534 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:25.534 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:20:25.534 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.534 08:17:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:20:25.534 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:25.534 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:25.534 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:20:26.102 request: 00:20:26.102 { 00:20:26.102 "name": "nvme0", 00:20:26.102 "dhchap_key": "key2", 00:20:26.102 "dhchap_ctrlr_key": "key0", 00:20:26.102 "method": "bdev_nvme_set_keys", 00:20:26.102 "req_id": 1 00:20:26.102 } 00:20:26.102 Got JSON-RPC error response 00:20:26.102 response: 00:20:26.102 { 00:20:26.102 "code": -13, 00:20:26.102 "message": "Permission denied" 00:20:26.102 } 00:20:26.102 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:20:26.102 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:26.102 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:20:26.102 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:26.102 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:26.102 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:26.102 08:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.360 08:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:20:26.360 08:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:20:27.299 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:20:27.299 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:20:27.299 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 1679254 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1679254 ']' 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1679254 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1679254 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 1679254' 00:20:27.557 killing process with pid 1679254 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1679254 00:20:27.557 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1679254 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:27.816 rmmod nvme_tcp 00:20:27.816 rmmod nvme_fabrics 00:20:27.816 rmmod nvme_keyring 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 1700719 ']' 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 1700719 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 1700719 ']' 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 1700719 00:20:27.816 
08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:27.816 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1700719 00:20:28.075 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:28.075 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:28.075 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1700719' 00:20:28.075 killing process with pid 1700719 00:20:28.075 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 1700719 00:20:28.075 08:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 1700719 00:20:28.075 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:28.075 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini 00:20:28.075 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@254 -- # local dev 00:20:28.075 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:28.075 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:28.075 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:28.075 08:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:30.609 08:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@121 -- # return 0 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:20:30.609 08:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@274 -- # iptr 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-save 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:30.609 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@548 -- # iptables-restore 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.Xil /tmp/spdk.key-sha256.rYa /tmp/spdk.key-sha384.oLO /tmp/spdk.key-sha512.kYm /tmp/spdk.key-sha512.N7S /tmp/spdk.key-sha384.cCy /tmp/spdk.key-sha256.NJd '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:20:30.610 00:20:30.610 real 2m31.516s 00:20:30.610 user 5m48.981s 00:20:30.610 sys 0m24.141s 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.610 ************************************ 00:20:30.610 END TEST nvmf_auth_target 00:20:30.610 ************************************ 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:30.610 ************************************ 00:20:30.610 START TEST nvmf_bdevio_no_huge 00:20:30.610 ************************************ 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:20:30.610 * Looking for test storage... 00:20:30.610 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:20:30.610 
08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:30.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.610 --rc genhtml_branch_coverage=1 00:20:30.610 --rc genhtml_function_coverage=1 00:20:30.610 --rc genhtml_legend=1 00:20:30.610 --rc 
geninfo_all_blocks=1 00:20:30.610 --rc geninfo_unexecuted_blocks=1 00:20:30.610 00:20:30.610 ' 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:30.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.610 --rc genhtml_branch_coverage=1 00:20:30.610 --rc genhtml_function_coverage=1 00:20:30.610 --rc genhtml_legend=1 00:20:30.610 --rc geninfo_all_blocks=1 00:20:30.610 --rc geninfo_unexecuted_blocks=1 00:20:30.610 00:20:30.610 ' 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:30.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.610 --rc genhtml_branch_coverage=1 00:20:30.610 --rc genhtml_function_coverage=1 00:20:30.610 --rc genhtml_legend=1 00:20:30.610 --rc geninfo_all_blocks=1 00:20:30.610 --rc geninfo_unexecuted_blocks=1 00:20:30.610 00:20:30.610 ' 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:30.610 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:30.610 --rc genhtml_branch_coverage=1 00:20:30.610 --rc genhtml_function_coverage=1 00:20:30.610 --rc genhtml_legend=1 00:20:30.610 --rc geninfo_all_blocks=1 00:20:30.610 --rc geninfo_unexecuted_blocks=1 00:20:30.610 00:20:30.610 ' 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.610 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:30.611 08:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:30.611 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:30.611 08:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # remove_target_ns 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # xtrace_disable 00:20:30.611 08:17:44 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # pci_devs=() 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # 
net_devs=() 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # e810=() 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # local -ga e810 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # x722=() 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # local -ga x722 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # mlx=() 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # local -ga mlx 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:37.179 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:37.180 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.180 08:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:37.180 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:37.180 08:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:37.180 Found net devices under 0000:86:00.0: cvl_0_0 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:37.180 Found net devices under 0000:86:00.1: cvl_0_1 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # is_hw=yes 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@247 -- # create_target_ns 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo 
up 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # local -g _dev 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:20:37.180 08:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 
10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:20:37.180 10.0.0.1 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:37.180 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:20:37.181 10.0.0.2 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:20:37.181 
08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:37.181 
08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:37.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:37.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.409 ms 00:20:37.181 00:20:37.181 --- 10.0.0.1 ping statistics --- 00:20:37.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.181 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:20:37.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:37.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:20:37.181 00:20:37.181 --- 10.0.0.2 ping statistics --- 00:20:37.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:37.181 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair++ )) 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # return 0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:20:37.181 
08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:37.181 
08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:20:37.181 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # return 1 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev= 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@160 -- # return 0 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target0 
in_ns=NVMF_TARGET_NS_CMD ip 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target0 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 
00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # local dev=target1 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # return 1 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@159 -- # dev= 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@160 -- # return 0 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:20:37.182 ' 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # [[ tcp == 
\t\c\p ]] 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # nvmfpid=1707479 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # waitforlisten 1707479 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 1707479 ']' 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:37.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:37.182 08:17:50 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.182 [2024-11-20 08:17:50.514032] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:20:37.182 [2024-11-20 08:17:50.514085] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:20:37.182 [2024-11-20 08:17:50.600640] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:37.182 [2024-11-20 08:17:50.646859] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:37.182 [2024-11-20 08:17:50.646890] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:37.182 [2024-11-20 08:17:50.646897] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:37.182 [2024-11-20 08:17:50.646906] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:37.182 [2024-11-20 08:17:50.646911] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:37.182 [2024-11-20 08:17:50.648135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:37.182 [2024-11-20 08:17:50.648235] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:20:37.182 [2024-11-20 08:17:50.648341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:37.182 [2024-11-20 08:17:50.648342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.441 [2024-11-20 08:17:51.402330] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:37.441 08:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.441 Malloc0 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:37.441 [2024-11-20 08:17:51.446628] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.441 08:17:51 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # config=() 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # local subsystem config 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:37.441 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:37.441 { 00:20:37.441 "params": { 00:20:37.442 "name": "Nvme$subsystem", 00:20:37.442 "trtype": "$TEST_TRANSPORT", 00:20:37.442 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:37.442 "adrfam": "ipv4", 00:20:37.442 "trsvcid": "$NVMF_PORT", 00:20:37.442 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:37.442 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:37.442 "hdgst": ${hdgst:-false}, 00:20:37.442 "ddgst": ${ddgst:-false} 00:20:37.442 }, 00:20:37.442 "method": "bdev_nvme_attach_controller" 00:20:37.442 } 00:20:37.442 EOF 00:20:37.442 )") 00:20:37.442 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # cat 00:20:37.442 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # jq . 
00:20:37.442 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@397 -- # IFS=, 00:20:37.700 08:17:51 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:20:37.700 "params": { 00:20:37.700 "name": "Nvme1", 00:20:37.700 "trtype": "tcp", 00:20:37.700 "traddr": "10.0.0.2", 00:20:37.700 "adrfam": "ipv4", 00:20:37.700 "trsvcid": "4420", 00:20:37.700 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:37.700 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:37.700 "hdgst": false, 00:20:37.700 "ddgst": false 00:20:37.700 }, 00:20:37.700 "method": "bdev_nvme_attach_controller" 00:20:37.700 }' 00:20:37.700 [2024-11-20 08:17:51.495931] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:20:37.700 [2024-11-20 08:17:51.495976] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid1707673 ] 00:20:37.700 [2024-11-20 08:17:51.576109] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:37.700 [2024-11-20 08:17:51.623988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:37.700 [2024-11-20 08:17:51.624095] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.700 [2024-11-20 08:17:51.624096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:37.957 I/O targets: 00:20:37.957 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:20:37.957 00:20:37.957 00:20:37.957 CUnit - A unit testing framework for C - Version 2.1-3 00:20:37.957 http://cunit.sourceforge.net/ 00:20:37.957 00:20:37.957 00:20:37.957 Suite: bdevio tests on: Nvme1n1 00:20:37.957 Test: blockdev write read block ...passed 00:20:37.957 Test: blockdev write zeroes read block ...passed 00:20:37.957 Test: blockdev write zeroes read no split ...passed 00:20:37.957 Test: blockdev write zeroes 
read split ...passed 00:20:37.957 Test: blockdev write zeroes read split partial ...passed 00:20:37.957 Test: blockdev reset ...[2024-11-20 08:17:51.910357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:37.957 [2024-11-20 08:17:51.910422] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12d4920 (9): Bad file descriptor 00:20:37.957 [2024-11-20 08:17:51.964446] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:20:37.957 passed 00:20:37.957 Test: blockdev write read 8 blocks ...passed 00:20:37.957 Test: blockdev write read size > 128k ...passed 00:20:37.957 Test: blockdev write read invalid size ...passed 00:20:38.215 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:20:38.215 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:20:38.215 Test: blockdev write read max offset ...passed 00:20:38.215 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:20:38.215 Test: blockdev writev readv 8 blocks ...passed 00:20:38.215 Test: blockdev writev readv 30 x 1block ...passed 00:20:38.215 Test: blockdev writev readv block ...passed 00:20:38.215 Test: blockdev writev readv size > 128k ...passed 00:20:38.215 Test: blockdev writev readv size > 128k in two iovs ...passed 00:20:38.215 Test: blockdev comparev and writev ...[2024-11-20 08:17:52.176231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.215 [2024-11-20 08:17:52.176266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:20:38.215 [2024-11-20 08:17:52.176280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.215 [2024-11-20 
08:17:52.176288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:20:38.215 [2024-11-20 08:17:52.176514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.215 [2024-11-20 08:17:52.176525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:20:38.215 [2024-11-20 08:17:52.176537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.215 [2024-11-20 08:17:52.176543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:20:38.215 [2024-11-20 08:17:52.176776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.215 [2024-11-20 08:17:52.176786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:20:38.215 [2024-11-20 08:17:52.176797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.215 [2024-11-20 08:17:52.176804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:20:38.215 [2024-11-20 08:17:52.177036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.215 [2024-11-20 08:17:52.177046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:20:38.215 [2024-11-20 08:17:52.177058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:20:38.215 [2024-11-20 08:17:52.177065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:20:38.215 passed 00:20:38.473 Test: blockdev nvme passthru rw ...passed 00:20:38.473 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:17:52.259507] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.473 [2024-11-20 08:17:52.259526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:20:38.473 [2024-11-20 08:17:52.259632] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.473 [2024-11-20 08:17:52.259643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:20:38.473 [2024-11-20 08:17:52.259756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.473 [2024-11-20 08:17:52.259766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:20:38.473 [2024-11-20 08:17:52.259881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:38.474 [2024-11-20 08:17:52.259892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:20:38.474 passed 00:20:38.474 Test: blockdev nvme admin passthru ...passed 00:20:38.474 Test: blockdev copy ...passed 00:20:38.474 00:20:38.474 Run Summary: Type Total Ran Passed Failed Inactive 00:20:38.474 suites 1 1 n/a 0 0 00:20:38.474 tests 23 23 23 0 0 00:20:38.474 asserts 152 152 152 0 n/a 00:20:38.474 00:20:38.474 Elapsed time = 1.064 seconds 
00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:38.732 rmmod nvme_tcp 00:20:38.732 rmmod nvme_fabrics 00:20:38.732 rmmod nvme_keyring 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@106 -- # set -e 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # return 0 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # '[' -n 1707479 ']' 00:20:38.732 08:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # killprocess 1707479 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 1707479 ']' 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 1707479 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1707479 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1707479' 00:20:38.732 killing process with pid 1707479 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 1707479 00:20:38.732 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 1707479 00:20:38.991 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:38.991 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # nvmf_fini 00:20:38.991 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@254 -- # local dev 00:20:38.991 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@257 -- # remove_target_ns 00:20:38.991 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 
00:20:38.991 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:38.991 08:17:52 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@258 -- # delete_main_bridge 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@121 -- # return 0 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:20:41.526 08:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@274 -- # iptr 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-save 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@548 -- # iptables-restore 00:20:41.526 00:20:41.526 real 0m10.885s 00:20:41.526 user 0m12.987s 00:20:41.526 sys 0m5.441s 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:20:41.526 ************************************ 00:20:41.526 END TEST nvmf_bdevio_no_huge 00:20:41.526 ************************************ 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 
00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:41.526 ************************************ 00:20:41.526 START TEST nvmf_tls 00:20:41.526 ************************************ 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:20:41.526 * Looking for test storage... 00:20:41.526 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:20:41.526 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
scripts/common.sh@338 -- # local 'op=<'
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 ))
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:20:41.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:41.527 --rc genhtml_branch_coverage=1
00:20:41.527 --rc genhtml_function_coverage=1
00:20:41.527 --rc genhtml_legend=1
00:20:41.527 --rc geninfo_all_blocks=1
00:20:41.527 --rc geninfo_unexecuted_blocks=1
00:20:41.527
00:20:41.527 '
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:20:41.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:41.527 --rc genhtml_branch_coverage=1
00:20:41.527 --rc genhtml_function_coverage=1
00:20:41.527 --rc genhtml_legend=1
00:20:41.527 --rc geninfo_all_blocks=1
00:20:41.527 --rc geninfo_unexecuted_blocks=1
00:20:41.527
00:20:41.527 '
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:20:41.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:41.527 --rc genhtml_branch_coverage=1
00:20:41.527 --rc genhtml_function_coverage=1
00:20:41.527 --rc genhtml_legend=1
00:20:41.527 --rc geninfo_all_blocks=1
00:20:41.527 --rc geninfo_unexecuted_blocks=1
00:20:41.527
00:20:41.527 '
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:20:41.527 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:20:41.527 --rc genhtml_branch_coverage=1
00:20:41.527 --rc genhtml_function_coverage=1
00:20:41.527 --rc genhtml_legend=1
00:20:41.527 --rc geninfo_all_blocks=1
00:20:41.527 --rc geninfo_unexecuted_blocks=1
00:20:41.527
00:20:41.527 '
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=phy
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
00:20:41.527 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # '[' -z tcp ']'
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # prepare_net_devs
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # local -g is_hw=no
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # remove_target_ns
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # [[ phy != virt ]]
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # xtrace_disable
00:20:41.527 08:17:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # pci_devs=()
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # local -a pci_devs
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # pci_net_devs=()
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # local -a pci_net_devs
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # pci_drivers=()
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # local -A pci_drivers
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # net_devs=()
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # local -ga net_devs
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # e810=()
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # local -ga e810
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # x722=()
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # local -ga x722
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # mlx=()
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # local -ga mlx
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:48.096 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}")
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # [[ tcp == rdma ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # [[ e810 == e810 ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}")
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # (( 2 == 0 ))
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:20:48.097 Found 0000:86:00.0 (0x8086 - 0x159b)
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:20:48.097 Found 0000:86:00.1 (0x8086 - 0x159b)
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # (( 0 > 0 ))
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ e810 == e810 ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ tcp == rdma ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # [[ up == up ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:20:48.097 Found net devices under 0000:86:00.0: cvl_0_0
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # [[ up == up ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:20:48.097 Found net devices under 0000:86:00.1: cvl_0_1
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # (( 2 == 0 ))
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # [[ tcp == rdma ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # is_hw=yes
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # [[ yes == yes ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # [[ tcp == tcp ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # nvmf_tcp_init
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@247 -- # create_target_ns
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=()
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@52 -- # [[ phy == phy ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@55 -- # initiator=cvl_0_0
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@55 -- # target=cvl_0_1
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@58 -- # [[ phy == veth ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@59 -- # [[ phy == veth ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772161
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.1
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.1
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias
00:20:48.097 10.0.0.1
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # val_to_ip 167772162
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@197 -- # ip=10.0.0.2
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # echo 10.0.0.2
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:20:48.097 10.0.0.2
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@66 -- # set_up cvl_0_0
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns=
00:20:48.097 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up'
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@69 -- # [[ phy == veth ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ phy == veth ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@87 -- # local pairs=1 pair
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:48.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.421 ms
00:20:48.098
00:20:48.098 --- 10.0.0.1 ping statistics ---
00:20:48.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:48.098 rtt min/avg/max/mdev = 0.421/0.421/0.421/0.000 ms
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:20:48.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:20:48.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms
00:20:48.098
00:20:48.098 --- 10.0.0.2 ping statistics ---
00:20:48.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:48.098 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair++ ))
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # return 0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159
-- # get_net_dev initiator1 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=initiator1 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # return 1 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev= 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@160 -- # return 0 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:48.098 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target0 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target0 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:20:48.099 08:18:01 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # get_net_dev target1 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # local dev=target1 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 
-- # [[ -n '' ]] 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # return 1 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@159 -- # dev= 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@160 -- # return 0 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:20:48.099 ' 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1711557 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1711557 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1711557 ']' 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:48.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:48.099 08:18:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.099 [2024-11-20 08:18:01.532299] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:20:48.099 [2024-11-20 08:18:01.532344] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:48.099 [2024-11-20 08:18:01.613212] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:48.099 [2024-11-20 08:18:01.653285] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:48.099 [2024-11-20 08:18:01.653320] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:48.099 [2024-11-20 08:18:01.653328] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:48.099 [2024-11-20 08:18:01.653334] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:48.099 [2024-11-20 08:18:01.653339] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:48.099 [2024-11-20 08:18:01.653874] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:48.357 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.357 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:20:48.357 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:48.357 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.357 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:48.615 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.615 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:20:48.615 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:20:48.615 true 00:20:48.615 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:48.615 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:20:48.874 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:20:48.874 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:20:48.874 
08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:49.132 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:49.132 08:18:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:20:49.390 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:20:49.390 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:20:49.390 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:20:49.390 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:49.390 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:20:49.648 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:20:49.648 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:20:49.648 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:49.648 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:20:49.907 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:20:49.907 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:20:49.907 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:20:50.165 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:50.165 08:18:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:20:50.165 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:20:50.165 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:20:50.165 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:20:50.424 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:20:50.424 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:20:50.683 08:18:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=ffeeddccbbaa99887766554433221100 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.G7KZOvugoV 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.QjJGuqDt0f 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.G7KZOvugoV 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.QjJGuqDt0f 00:20:50.683 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:20:50.942 08:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:20:51.200 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.G7KZOvugoV 00:20:51.200 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.G7KZOvugoV 00:20:51.200 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:51.200 [2024-11-20 08:18:05.220650] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:51.458 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:20:51.458 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:20:51.717 [2024-11-20 08:18:05.605635] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:51.717 [2024-11-20 08:18:05.605835] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:51.717 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:20:51.976 malloc0 00:20:51.976 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:20:51.976 08:18:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.G7KZOvugoV 00:20:52.235 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:20:52.493 08:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.G7KZOvugoV 00:21:02.469 Initializing NVMe Controllers 00:21:02.469 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.469 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:02.469 Initialization complete. Launching workers. 
00:21:02.469 ======================================================== 00:21:02.469 Latency(us) 00:21:02.469 Device Information : IOPS MiB/s Average min max 00:21:02.469 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16782.67 65.56 3813.55 802.21 5979.58 00:21:02.469 ======================================================== 00:21:02.469 Total : 16782.67 65.56 3813.55 802.21 5979.58 00:21:02.469 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.G7KZOvugoV 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.G7KZOvugoV 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1714535 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1714535 /var/tmp/bdevperf.sock 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1714535 ']' 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:02.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.469 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:02.728 [2024-11-20 08:18:16.521011] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:21:02.728 [2024-11-20 08:18:16.521055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1714535 ] 00:21:02.728 [2024-11-20 08:18:16.595163] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:02.728 [2024-11-20 08:18:16.636176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.728 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:02.728 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:02.728 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G7KZOvugoV 00:21:02.987 08:18:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:21:03.246 [2024-11-20 08:18:17.078531] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:03.246 TLSTESTn1 00:21:03.246 08:18:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:03.246 Running I/O for 10 seconds... 00:21:05.561 5414.00 IOPS, 21.15 MiB/s [2024-11-20T07:18:20.526Z] 5402.00 IOPS, 21.10 MiB/s [2024-11-20T07:18:21.473Z] 5454.33 IOPS, 21.31 MiB/s [2024-11-20T07:18:22.440Z] 5488.25 IOPS, 21.44 MiB/s [2024-11-20T07:18:23.438Z] 5500.80 IOPS, 21.49 MiB/s [2024-11-20T07:18:24.374Z] 5521.83 IOPS, 21.57 MiB/s [2024-11-20T07:18:25.309Z] 5537.43 IOPS, 21.63 MiB/s [2024-11-20T07:18:26.687Z] 5554.50 IOPS, 21.70 MiB/s [2024-11-20T07:18:27.625Z] 5554.78 IOPS, 21.70 MiB/s [2024-11-20T07:18:27.625Z] 5560.90 IOPS, 21.72 MiB/s 00:21:13.597 Latency(us) 00:21:13.597 [2024-11-20T07:18:27.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.597 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:13.597 Verification LBA range: start 0x0 length 0x2000 00:21:13.597 TLSTESTn1 : 10.02 5563.95 21.73 0.00 0.00 22968.66 6272.73 23842.62 00:21:13.597 [2024-11-20T07:18:27.625Z] =================================================================================================================== 00:21:13.597 [2024-11-20T07:18:27.625Z] Total : 5563.95 21.73 0.00 0.00 22968.66 6272.73 23842.62 00:21:13.597 { 00:21:13.597 "results": [ 00:21:13.597 { 00:21:13.597 "job": "TLSTESTn1", 00:21:13.597 "core_mask": "0x4", 00:21:13.597 "workload": "verify", 00:21:13.597 "status": "finished", 00:21:13.597 "verify_range": { 00:21:13.597 "start": 0, 00:21:13.597 "length": 8192 00:21:13.597 }, 00:21:13.597 "queue_depth": 128, 00:21:13.597 "io_size": 4096, 00:21:13.597 "runtime": 10.017163, 00:21:13.597 "iops": 
5563.950591599638, 00:21:13.597 "mibps": 21.734181998436085, 00:21:13.597 "io_failed": 0, 00:21:13.597 "io_timeout": 0, 00:21:13.597 "avg_latency_us": 22968.656998261333, 00:21:13.597 "min_latency_us": 6272.731428571428, 00:21:13.597 "max_latency_us": 23842.620952380952 00:21:13.597 } 00:21:13.597 ], 00:21:13.597 "core_count": 1 00:21:13.597 } 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1714535 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1714535 ']' 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1714535 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1714535 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1714535' 00:21:13.597 killing process with pid 1714535 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1714535 00:21:13.597 Received shutdown signal, test time was about 10.000000 seconds 00:21:13.597 00:21:13.597 Latency(us) 00:21:13.597 [2024-11-20T07:18:27.625Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:13.597 [2024-11-20T07:18:27.625Z] 
=================================================================================================================== 00:21:13.597 [2024-11-20T07:18:27.625Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1714535 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QjJGuqDt0f 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QjJGuqDt0f 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.QjJGuqDt0f 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:13.597 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.QjJGuqDt0f 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1716249 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1716249 /var/tmp/bdevperf.sock 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1716249 ']' 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:13.598 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:13.598 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:13.598 [2024-11-20 08:18:27.585364] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:21:13.598 [2024-11-20 08:18:27.585417] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716249 ] 00:21:13.857 [2024-11-20 08:18:27.662067] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:13.857 [2024-11-20 08:18:27.700475] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:13.857 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.857 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:13.857 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.QjJGuqDt0f 00:21:14.116 08:18:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:14.376 [2024-11-20 08:18:28.159044] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:14.376 [2024-11-20 08:18:28.163745] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:21:14.376 [2024-11-20 08:18:28.164395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da6170 (107): Transport endpoint is not connected 00:21:14.376 [2024-11-20 08:18:28.165387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1da6170 (9): Bad file descriptor 00:21:14.376 
[2024-11-20 08:18:28.166388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:14.376 [2024-11-20 08:18:28.166404] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:14.376 [2024-11-20 08:18:28.166412] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:14.376 [2024-11-20 08:18:28.166427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:14.376 request: 00:21:14.376 { 00:21:14.376 "name": "TLSTEST", 00:21:14.376 "trtype": "tcp", 00:21:14.376 "traddr": "10.0.0.2", 00:21:14.376 "adrfam": "ipv4", 00:21:14.376 "trsvcid": "4420", 00:21:14.376 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:14.376 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:14.376 "prchk_reftag": false, 00:21:14.376 "prchk_guard": false, 00:21:14.376 "hdgst": false, 00:21:14.376 "ddgst": false, 00:21:14.376 "psk": "key0", 00:21:14.376 "allow_unrecognized_csi": false, 00:21:14.376 "method": "bdev_nvme_attach_controller", 00:21:14.376 "req_id": 1 00:21:14.376 } 00:21:14.376 Got JSON-RPC error response 00:21:14.376 response: 00:21:14.376 { 00:21:14.376 "code": -5, 00:21:14.376 "message": "Input/output error" 00:21:14.376 } 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1716249 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1716249 ']' 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1716249 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1716249 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1716249' 00:21:14.376 killing process with pid 1716249 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1716249 00:21:14.376 Received shutdown signal, test time was about 10.000000 seconds 00:21:14.376 00:21:14.376 Latency(us) 00:21:14.376 [2024-11-20T07:18:28.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:14.376 [2024-11-20T07:18:28.404Z] =================================================================================================================== 00:21:14.376 [2024-11-20T07:18:28.404Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1716249 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.G7KZOvugoV 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.G7KZOvugoV 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.G7KZOvugoV 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.G7KZOvugoV 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1716397 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1716397 
/var/tmp/bdevperf.sock 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1716397 ']' 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:14.376 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:14.376 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:14.635 [2024-11-20 08:18:28.442373] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:21:14.636 [2024-11-20 08:18:28.442424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716397 ] 00:21:14.636 [2024-11-20 08:18:28.511962] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:14.636 [2024-11-20 08:18:28.549447] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:14.636 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:14.636 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:14.636 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G7KZOvugoV 00:21:14.894 08:18:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:21:15.153 [2024-11-20 08:18:29.015750] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.153 [2024-11-20 08:18:29.024684] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:15.153 [2024-11-20 08:18:29.024706] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:21:15.153 [2024-11-20 08:18:29.024729] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:15.153 [2024-11-20 08:18:29.025118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf9170 (107): Transport endpoint is not connected 00:21:15.153 [2024-11-20 08:18:29.026112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaf9170 (9): Bad file descriptor 00:21:15.153 [2024-11-20 08:18:29.027114] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:21:15.153 [2024-11-20 08:18:29.027133] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:15.153 [2024-11-20 08:18:29.027141] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:21:15.153 [2024-11-20 08:18:29.027151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:21:15.153 request: 00:21:15.153 { 00:21:15.153 "name": "TLSTEST", 00:21:15.153 "trtype": "tcp", 00:21:15.153 "traddr": "10.0.0.2", 00:21:15.153 "adrfam": "ipv4", 00:21:15.153 "trsvcid": "4420", 00:21:15.153 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:15.153 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:15.153 "prchk_reftag": false, 00:21:15.153 "prchk_guard": false, 00:21:15.153 "hdgst": false, 00:21:15.153 "ddgst": false, 00:21:15.153 "psk": "key0", 00:21:15.153 "allow_unrecognized_csi": false, 00:21:15.153 "method": "bdev_nvme_attach_controller", 00:21:15.153 "req_id": 1 00:21:15.153 } 00:21:15.153 Got JSON-RPC error response 00:21:15.153 response: 00:21:15.153 { 00:21:15.153 "code": -5, 00:21:15.153 "message": "Input/output error" 00:21:15.153 } 00:21:15.153 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1716397 00:21:15.153 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1716397 ']' 00:21:15.153 08:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1716397 00:21:15.153 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:15.153 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.153 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1716397 00:21:15.153 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:15.153 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:15.153 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1716397' 00:21:15.153 killing process with pid 1716397 00:21:15.153 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1716397 00:21:15.153 Received shutdown signal, test time was about 10.000000 seconds 00:21:15.153 00:21:15.153 Latency(us) 00:21:15.153 [2024-11-20T07:18:29.181Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:15.153 [2024-11-20T07:18:29.181Z] =================================================================================================================== 00:21:15.153 [2024-11-20T07:18:29.181Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:15.153 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1716397 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:15.412 08:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.G7KZOvugoV 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.G7KZOvugoV 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.G7KZOvugoV 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.G7KZOvugoV 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1716631 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1716631 /var/tmp/bdevperf.sock 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1716631 ']' 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:15.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:15.412 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:15.412 [2024-11-20 08:18:29.308069] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:21:15.412 [2024-11-20 08:18:29.308115] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716631 ] 00:21:15.412 [2024-11-20 08:18:29.383466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:15.412 [2024-11-20 08:18:29.425374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:15.672 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:15.672 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:15.672 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.G7KZOvugoV 00:21:15.931 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:15.931 [2024-11-20 08:18:29.868083] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:15.931 [2024-11-20 08:18:29.876820] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:15.931 [2024-11-20 08:18:29.876841] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:21:15.931 [2024-11-20 08:18:29.876863] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:21:15.931 [2024-11-20 08:18:29.877405] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240f170 (107): Transport endpoint is not connected 00:21:15.931 [2024-11-20 08:18:29.878399] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240f170 (9): Bad file descriptor 00:21:15.931 [2024-11-20 08:18:29.879401] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:21:15.931 [2024-11-20 08:18:29.879412] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:21:15.931 [2024-11-20 08:18:29.879418] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:21:15.931 [2024-11-20 08:18:29.879429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:21:15.931 request: 00:21:15.931 { 00:21:15.931 "name": "TLSTEST", 00:21:15.931 "trtype": "tcp", 00:21:15.931 "traddr": "10.0.0.2", 00:21:15.931 "adrfam": "ipv4", 00:21:15.931 "trsvcid": "4420", 00:21:15.931 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:15.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:15.931 "prchk_reftag": false, 00:21:15.931 "prchk_guard": false, 00:21:15.931 "hdgst": false, 00:21:15.931 "ddgst": false, 00:21:15.931 "psk": "key0", 00:21:15.931 "allow_unrecognized_csi": false, 00:21:15.931 "method": "bdev_nvme_attach_controller", 00:21:15.931 "req_id": 1 00:21:15.931 } 00:21:15.931 Got JSON-RPC error response 00:21:15.931 response: 00:21:15.931 { 00:21:15.931 "code": -5, 00:21:15.931 "message": "Input/output error" 00:21:15.931 } 00:21:15.931 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1716631 00:21:15.931 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1716631 ']' 00:21:15.931 08:18:29 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1716631 00:21:15.931 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:15.931 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.931 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1716631 00:21:16.191 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:16.191 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:16.191 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1716631' 00:21:16.191 killing process with pid 1716631 00:21:16.191 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1716631 00:21:16.191 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.191 00:21:16.191 Latency(us) 00:21:16.191 [2024-11-20T07:18:30.219Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.191 [2024-11-20T07:18:30.219Z] =================================================================================================================== 00:21:16.191 [2024-11-20T07:18:30.219Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:16.191 08:18:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1716631 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:16.191 08:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1716645 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:16.191 08:18:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1716645 /var/tmp/bdevperf.sock 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1716645 ']' 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:16.191 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:16.191 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:16.191 [2024-11-20 08:18:30.163746] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:21:16.191 [2024-11-20 08:18:30.163798] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1716645 ] 00:21:16.450 [2024-11-20 08:18:30.242313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:16.450 [2024-11-20 08:18:30.280305] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:16.450 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:16.450 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:16.450 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:21:16.709 [2024-11-20 08:18:30.550129] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:21:16.709 [2024-11-20 08:18:30.550164] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:16.709 request: 00:21:16.709 { 00:21:16.709 "name": "key0", 00:21:16.709 "path": "", 00:21:16.709 "method": "keyring_file_add_key", 00:21:16.709 "req_id": 1 00:21:16.709 } 00:21:16.709 Got JSON-RPC error response 00:21:16.709 response: 00:21:16.709 { 00:21:16.709 "code": -1, 00:21:16.709 "message": "Operation not permitted" 00:21:16.709 } 00:21:16.709 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:16.968 [2024-11-20 08:18:30.750747] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:21:16.968 [2024-11-20 08:18:30.750776] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:16.968 request: 00:21:16.968 { 00:21:16.968 "name": "TLSTEST", 00:21:16.968 "trtype": "tcp", 00:21:16.968 "traddr": "10.0.0.2", 00:21:16.968 "adrfam": "ipv4", 00:21:16.968 "trsvcid": "4420", 00:21:16.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:16.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:16.968 "prchk_reftag": false, 00:21:16.968 "prchk_guard": false, 00:21:16.968 "hdgst": false, 00:21:16.968 "ddgst": false, 00:21:16.968 "psk": "key0", 00:21:16.968 "allow_unrecognized_csi": false, 00:21:16.968 "method": "bdev_nvme_attach_controller", 00:21:16.968 "req_id": 1 00:21:16.968 } 00:21:16.968 Got JSON-RPC error response 00:21:16.968 response: 00:21:16.968 { 00:21:16.968 "code": -126, 00:21:16.968 "message": "Required key not available" 00:21:16.968 } 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1716645 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1716645 ']' 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1716645 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1716645 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1716645' 00:21:16.968 killing process with pid 1716645 
00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1716645 00:21:16.968 Received shutdown signal, test time was about 10.000000 seconds 00:21:16.968 00:21:16.968 Latency(us) 00:21:16.968 [2024-11-20T07:18:30.996Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:16.968 [2024-11-20T07:18:30.996Z] =================================================================================================================== 00:21:16.968 [2024-11-20T07:18:30.996Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1716645 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:16.968 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:16.969 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:16.969 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 1711557 00:21:16.969 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1711557 ']' 00:21:16.969 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1711557 00:21:16.969 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:16.969 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.969 08:18:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1711557 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1711557' 00:21:17.228 killing process with pid 1711557 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1711557 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1711557 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=2 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.5dhRnXSWg5 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:21:17.228 08:18:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.5dhRnXSWg5 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1716890 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1716890 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1716890 ']' 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:17.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:17.228 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.488 [2024-11-20 08:18:31.279260] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
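The key_long value logged above is an NVMe TLS pre-shared key in interchange format. Judging from the format_interchange_psk output, the encoding appears to be the configured key material with its little-endian CRC32 appended, base64-encoded, and wrapped in a `NVMeTLSkey-1:<hash-id>:<base64>:` envelope. A minimal Python sketch under that assumption (the function name here is illustrative, not SPDK's actual shell helper):

```python
import base64
import zlib


def format_interchange_psk(key: str, hash_id: int, prefix: str = "NVMeTLSkey-1") -> str:
    """Sketch of the PSK interchange encoding: base64(key || crc32(key)),
    wrapped as '<prefix>:<hash_id>:<base64>:'."""
    raw = key.encode()
    # CRC32 of the key material, appended as 4 little-endian bytes.
    blob = raw + zlib.crc32(raw).to_bytes(4, "little")
    return "{}:{:02x}:{}:".format(prefix, hash_id, base64.b64encode(blob).decode())


# Same key material and digest id (2) as the tls.sh@160 call above.
print(format_interchange_psk("00112233445566778899aabbccddeeff0011223344556677", 2))
```

With the inputs from the log, this reproduces the key_long string that tls.sh writes to the mktemp file before chmod 0600.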
00:21:17.488 [2024-11-20 08:18:31.279311] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:17.488 [2024-11-20 08:18:31.356034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:17.488 [2024-11-20 08:18:31.392794] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:17.488 [2024-11-20 08:18:31.392827] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:17.488 [2024-11-20 08:18:31.392834] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:17.488 [2024-11-20 08:18:31.392839] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:17.488 [2024-11-20 08:18:31.392845] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:17.488 [2024-11-20 08:18:31.393401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:17.488 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:17.488 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:17.488 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:17.488 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:17.488 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:17.747 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:17.747 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.5dhRnXSWg5 00:21:17.747 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5dhRnXSWg5 00:21:17.747 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:17.747 [2024-11-20 08:18:31.707959] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:17.747 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:18.005 08:18:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:18.265 [2024-11-20 08:18:32.108991] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:18.265 [2024-11-20 08:18:32.109430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:18.265 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:18.524 malloc0 00:21:18.524 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:18.524 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5dhRnXSWg5 00:21:18.783 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5dhRnXSWg5 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5dhRnXSWg5 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1717152 00:21:19.043 08:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1717152 /var/tmp/bdevperf.sock 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1717152 ']' 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:19.043 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:19.043 08:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:19.043 [2024-11-20 08:18:32.944803] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:21:19.043 [2024-11-20 08:18:32.944856] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1717152 ] 00:21:19.043 [2024-11-20 08:18:33.017015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:19.043 [2024-11-20 08:18:33.057441] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:19.302 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:19.302 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:19.302 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5dhRnXSWg5 00:21:19.561 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:19.561 [2024-11-20 08:18:33.527713] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:19.820 TLSTESTn1 00:21:19.820 08:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:19.820 Running I/O for 10 seconds... 
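The bdevperf.py invocation above drives the already-running bdevperf process over its `-s /var/tmp/bdevperf.sock` socket with a JSON-RPC `perform_tests` call. A minimal sketch of that exchange, with an in-process stand-in for bdevperf and newline-delimited framing for simplicity (SPDK's actual RPC server parses raw JSON off the stream, and the real call only returns once the workload finishes):

```python
import json
import os
import socket
import tempfile
import threading

sock_path = os.path.join(tempfile.mkdtemp(), "bdevperf.sock")
srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(sock_path)
srv.listen(1)


def fake_bdevperf() -> None:
    # Stand-in for the bdevperf app: answer one JSON-RPC request on the socket.
    conn, _ = srv.accept()
    req = json.loads(conn.makefile().readline())
    resp = {"jsonrpc": "2.0", "id": req["id"], "result": True}
    conn.sendall((json.dumps(resp) + "\n").encode())
    conn.close()


def perform_tests(path: str) -> bool:
    # What bdevperf.py's perform_tests boils down to: a single RPC round-trip.
    cli = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    cli.connect(path)
    cli.sendall((json.dumps({"jsonrpc": "2.0", "method": "perform_tests", "id": 1}) + "\n").encode())
    result = json.loads(cli.makefile().readline())["result"]
    cli.close()
    return result


t = threading.Thread(target=fake_bdevperf)
t.start()
print(perform_tests(sock_path))  # prints True
t.join()
srv.close()
```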
00:21:21.694 5390.00 IOPS, 21.05 MiB/s [2024-11-20T07:18:37.101Z] 5456.50 IOPS, 21.31 MiB/s [2024-11-20T07:18:38.068Z] 5510.67 IOPS, 21.53 MiB/s [2024-11-20T07:18:39.007Z] 5469.25 IOPS, 21.36 MiB/s [2024-11-20T07:18:39.989Z] 5503.20 IOPS, 21.50 MiB/s [2024-11-20T07:18:40.926Z] 5485.33 IOPS, 21.43 MiB/s [2024-11-20T07:18:41.863Z] 5475.43 IOPS, 21.39 MiB/s [2024-11-20T07:18:42.799Z] 5493.62 IOPS, 21.46 MiB/s [2024-11-20T07:18:43.737Z] 5503.11 IOPS, 21.50 MiB/s [2024-11-20T07:18:43.997Z] 5500.40 IOPS, 21.49 MiB/s 00:21:29.969 Latency(us) 00:21:29.969 [2024-11-20T07:18:43.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.969 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:29.969 Verification LBA range: start 0x0 length 0x2000 00:21:29.969 TLSTESTn1 : 10.01 5505.67 21.51 0.00 0.00 23214.84 5398.92 41943.04 00:21:29.969 [2024-11-20T07:18:43.997Z] =================================================================================================================== 00:21:29.969 [2024-11-20T07:18:43.997Z] Total : 5505.67 21.51 0.00 0.00 23214.84 5398.92 41943.04 00:21:29.969 { 00:21:29.969 "results": [ 00:21:29.969 { 00:21:29.969 "job": "TLSTESTn1", 00:21:29.969 "core_mask": "0x4", 00:21:29.969 "workload": "verify", 00:21:29.969 "status": "finished", 00:21:29.969 "verify_range": { 00:21:29.969 "start": 0, 00:21:29.969 "length": 8192 00:21:29.969 }, 00:21:29.969 "queue_depth": 128, 00:21:29.969 "io_size": 4096, 00:21:29.969 "runtime": 10.01332, 00:21:29.969 "iops": 5505.666452285555, 00:21:29.969 "mibps": 21.50650957924045, 00:21:29.969 "io_failed": 0, 00:21:29.969 "io_timeout": 0, 00:21:29.969 "avg_latency_us": 23214.843721282163, 00:21:29.969 "min_latency_us": 5398.918095238095, 00:21:29.969 "max_latency_us": 41943.04 00:21:29.969 } 00:21:29.969 ], 00:21:29.969 "core_count": 1 00:21:29.969 } 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 1717152 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1717152 ']' 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1717152 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1717152 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1717152' 00:21:29.969 killing process with pid 1717152 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1717152 00:21:29.969 Received shutdown signal, test time was about 10.000000 seconds 00:21:29.969 00:21:29.969 Latency(us) 00:21:29.969 [2024-11-20T07:18:43.997Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:29.969 [2024-11-20T07:18:43.997Z] =================================================================================================================== 00:21:29.969 [2024-11-20T07:18:43.997Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1717152 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.5dhRnXSWg5 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # NOT 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5dhRnXSWg5 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5dhRnXSWg5 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.5dhRnXSWg5 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.5dhRnXSWg5 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=1719006 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 1719006 /var/tmp/bdevperf.sock 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1719006 ']' 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:29.969 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:29.969 08:18:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:30.229 [2024-11-20 08:18:44.029762] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
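In the TLSTESTn1 summary earlier (5505.67 IOPS, 21.51 MiB/s), the MiB/s column is simply the IOPS column scaled by the 4096-byte I/O size (4096 / 2^20 MiB per I/O). A quick arithmetic check against the JSON results above:

```python
def iops_to_mibps(iops: float, io_size_bytes: int) -> float:
    """Convert an IOPS figure to MiB/s for a fixed I/O size."""
    return iops * io_size_bytes / (1024 * 1024)


# Figures from the TLSTESTn1 JSON: "iops": 5505.666452..., "io_size": 4096.
print(round(iops_to_mibps(5505.666452285555, 4096), 2))  # prints 21.51
```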
00:21:30.229 [2024-11-20 08:18:44.029811] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719006 ] 00:21:30.229 [2024-11-20 08:18:44.101972] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:30.229 [2024-11-20 08:18:44.139096] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:30.229 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:30.229 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:30.229 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5dhRnXSWg5 00:21:30.488 [2024-11-20 08:18:44.408854] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5dhRnXSWg5': 0100666 00:21:30.488 [2024-11-20 08:18:44.408889] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:30.488 request: 00:21:30.488 { 00:21:30.488 "name": "key0", 00:21:30.488 "path": "/tmp/tmp.5dhRnXSWg5", 00:21:30.488 "method": "keyring_file_add_key", 00:21:30.488 "req_id": 1 00:21:30.488 } 00:21:30.488 Got JSON-RPC error response 00:21:30.488 response: 00:21:30.488 { 00:21:30.488 "code": -1, 00:21:30.488 "message": "Operation not permitted" 00:21:30.488 } 00:21:30.488 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:30.747 [2024-11-20 08:18:44.621487] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:30.747 [2024-11-20 08:18:44.621519] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:21:30.747 request: 00:21:30.747 { 00:21:30.747 "name": "TLSTEST", 00:21:30.747 "trtype": "tcp", 00:21:30.747 "traddr": "10.0.0.2", 00:21:30.747 "adrfam": "ipv4", 00:21:30.747 "trsvcid": "4420", 00:21:30.747 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:30.747 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:30.747 "prchk_reftag": false, 00:21:30.747 "prchk_guard": false, 00:21:30.747 "hdgst": false, 00:21:30.747 "ddgst": false, 00:21:30.747 "psk": "key0", 00:21:30.747 "allow_unrecognized_csi": false, 00:21:30.747 "method": "bdev_nvme_attach_controller", 00:21:30.747 "req_id": 1 00:21:30.747 } 00:21:30.747 Got JSON-RPC error response 00:21:30.747 response: 00:21:30.747 { 00:21:30.747 "code": -126, 00:21:30.747 "message": "Required key not available" 00:21:30.747 } 00:21:30.747 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 1719006 00:21:30.747 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1719006 ']' 00:21:30.747 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1719006 00:21:30.747 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:30.747 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.747 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1719006 00:21:30.747 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:30.747 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:30.747 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1719006' 00:21:30.747 killing process with pid 1719006 00:21:30.747 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1719006 00:21:30.747 Received shutdown signal, test time was about 10.000000 seconds 00:21:30.747 00:21:30.747 Latency(us) 00:21:30.747 [2024-11-20T07:18:44.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.747 [2024-11-20T07:18:44.775Z] =================================================================================================================== 00:21:30.747 [2024-11-20T07:18:44.776Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:30.748 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1719006 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 1716890 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1716890 ']' 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1716890 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1716890 00:21:31.007 
08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1716890' 00:21:31.007 killing process with pid 1716890 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1716890 00:21:31.007 08:18:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1716890 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1719246 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1719246 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1719246 ']' 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:21:31.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:31.266 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:31.267 [2024-11-20 08:18:45.126916] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:21:31.267 [2024-11-20 08:18:45.126965] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:31.267 [2024-11-20 08:18:45.205974] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.267 [2024-11-20 08:18:45.246541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:31.267 [2024-11-20 08:18:45.246579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:31.267 [2024-11-20 08:18:45.246586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:31.267 [2024-11-20 08:18:45.246595] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:31.267 [2024-11-20 08:18:45.246601] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
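The keyring errors in this run ("Invalid permissions for key file '/tmp/tmp.5dhRnXSWg5': 0100666") are triggered deliberately: tls.sh@171 chmods the key to 0666, and keyring_file_add_key then refuses anything other than an owner-only regular file, until tls.sh@182 restores 0600. A small Python sketch of that style of check (`key_file_mode_ok` is an illustrative name, not SPDK's actual C helper, which may also accept other owner-only modes):

```python
import os
import stat
import tempfile


def key_file_mode_ok(path: str) -> bool:
    """Accept a key file only when it is a regular file with mode 0600,
    mirroring the keyring_file permission error seen in the log."""
    st = os.stat(path)
    return stat.S_ISREG(st.st_mode) and stat.S_IMODE(st.st_mode) == 0o600


with tempfile.NamedTemporaryFile(delete=False) as f:
    key_path = f.name

os.chmod(key_path, 0o666)   # group/world readable: rejected
print(key_file_mode_ok(key_path))  # prints False
os.chmod(key_path, 0o600)   # owner-only: accepted
print(key_file_mode_ok(key_path))  # prints True
os.unlink(key_path)
```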
00:21:31.267 [2024-11-20 08:18:45.247171] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:32.204 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:32.204 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:32.204 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:32.204 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:32.205 08:18:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:32.205 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:32.205 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.5dhRnXSWg5 00:21:32.205 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:21:32.205 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.5dhRnXSWg5 00:21:32.205 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:21:32.205 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.205 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:21:32.205 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:32.205 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.5dhRnXSWg5 00:21:32.205 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5dhRnXSWg5 00:21:32.205 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:32.205 [2024-11-20 08:18:46.194791] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:32.205 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:32.463 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:32.722 [2024-11-20 08:18:46.583798] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:32.722 [2024-11-20 08:18:46.583981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:32.722 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:32.981 malloc0 00:21:32.981 08:18:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:33.241 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5dhRnXSWg5 00:21:33.241 [2024-11-20 08:18:47.177171] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.5dhRnXSWg5': 0100666 00:21:33.241 [2024-11-20 08:18:47.177195] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:21:33.241 request: 00:21:33.241 { 00:21:33.241 "name": "key0", 00:21:33.241 "path": "/tmp/tmp.5dhRnXSWg5", 00:21:33.241 "method": "keyring_file_add_key", 00:21:33.241 "req_id": 1 
00:21:33.241 } 00:21:33.241 Got JSON-RPC error response 00:21:33.241 response: 00:21:33.241 { 00:21:33.241 "code": -1, 00:21:33.241 "message": "Operation not permitted" 00:21:33.241 } 00:21:33.242 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:33.501 [2024-11-20 08:18:47.373723] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:21:33.501 [2024-11-20 08:18:47.373760] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:21:33.501 request: 00:21:33.501 { 00:21:33.501 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:33.501 "host": "nqn.2016-06.io.spdk:host1", 00:21:33.501 "psk": "key0", 00:21:33.501 "method": "nvmf_subsystem_add_host", 00:21:33.501 "req_id": 1 00:21:33.501 } 00:21:33.501 Got JSON-RPC error response 00:21:33.501 response: 00:21:33.501 { 00:21:33.501 "code": -32603, 00:21:33.501 "message": "Internal error" 00:21:33.501 } 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 1719246 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1719246 ']' 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1719246 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:33.501 08:18:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1719246 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1719246' 00:21:33.501 killing process with pid 1719246 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1719246 00:21:33.501 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1719246 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.5dhRnXSWg5 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1719730 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1719730 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1719730 ']' 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.761 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:33.761 [2024-11-20 08:18:47.677616] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:21:33.761 [2024-11-20 08:18:47.677666] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.761 [2024-11-20 08:18:47.757981] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.020 [2024-11-20 08:18:47.798424] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.020 [2024-11-20 08:18:47.798456] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.020 [2024-11-20 08:18:47.798463] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.020 [2024-11-20 08:18:47.798469] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.020 [2024-11-20 08:18:47.798475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:34.020 [2024-11-20 08:18:47.799010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:34.020 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.020 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:34.020 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:34.020 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.020 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:34.020 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.020 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.5dhRnXSWg5 00:21:34.021 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5dhRnXSWg5 00:21:34.021 08:18:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:34.280 [2024-11-20 08:18:48.105375] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.280 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:34.540 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:34.540 [2024-11-20 08:18:48.494374] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:34.540 [2024-11-20 08:18:48.494561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:21:34.540 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:34.800 malloc0 00:21:34.800 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:35.109 08:18:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5dhRnXSWg5 00:21:35.109 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:35.402 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:21:35.402 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=1719994 00:21:35.402 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:35.402 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 1719994 /var/tmp/bdevperf.sock 00:21:35.402 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1719994 ']' 00:21:35.402 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.402 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:35.402 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:21:35.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.402 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:35.402 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:35.402 [2024-11-20 08:18:49.367527] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:21:35.402 [2024-11-20 08:18:49.367577] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1719994 ] 00:21:35.660 [2024-11-20 08:18:49.440993] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:35.660 [2024-11-20 08:18:49.480869] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:35.660 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:35.660 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:35.660 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5dhRnXSWg5 00:21:35.919 08:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:36.177 [2024-11-20 08:18:49.943481] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.177 TLSTESTn1 00:21:36.177 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:21:36.437 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:21:36.437 "subsystems": [ 00:21:36.437 { 00:21:36.437 "subsystem": "keyring", 00:21:36.437 "config": [ 00:21:36.437 { 00:21:36.437 "method": "keyring_file_add_key", 00:21:36.437 "params": { 00:21:36.437 "name": "key0", 00:21:36.437 "path": "/tmp/tmp.5dhRnXSWg5" 00:21:36.437 } 00:21:36.437 } 00:21:36.437 ] 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "subsystem": "iobuf", 00:21:36.437 "config": [ 00:21:36.437 { 00:21:36.437 "method": "iobuf_set_options", 00:21:36.437 "params": { 00:21:36.437 "small_pool_count": 8192, 00:21:36.437 "large_pool_count": 1024, 00:21:36.437 "small_bufsize": 8192, 00:21:36.437 "large_bufsize": 135168, 00:21:36.437 "enable_numa": false 00:21:36.437 } 00:21:36.437 } 00:21:36.437 ] 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "subsystem": "sock", 00:21:36.437 "config": [ 00:21:36.437 { 00:21:36.437 "method": "sock_set_default_impl", 00:21:36.437 "params": { 00:21:36.437 "impl_name": "posix" 00:21:36.437 } 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "method": "sock_impl_set_options", 00:21:36.437 "params": { 00:21:36.437 "impl_name": "ssl", 00:21:36.437 "recv_buf_size": 4096, 00:21:36.437 "send_buf_size": 4096, 00:21:36.437 "enable_recv_pipe": true, 00:21:36.437 "enable_quickack": false, 00:21:36.437 "enable_placement_id": 0, 00:21:36.437 "enable_zerocopy_send_server": true, 00:21:36.437 "enable_zerocopy_send_client": false, 00:21:36.437 "zerocopy_threshold": 0, 00:21:36.437 "tls_version": 0, 00:21:36.437 "enable_ktls": false 00:21:36.437 } 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "method": "sock_impl_set_options", 00:21:36.437 "params": { 00:21:36.437 "impl_name": "posix", 00:21:36.437 "recv_buf_size": 2097152, 00:21:36.437 "send_buf_size": 2097152, 00:21:36.437 "enable_recv_pipe": true, 00:21:36.437 "enable_quickack": false, 00:21:36.437 "enable_placement_id": 0, 
00:21:36.437 "enable_zerocopy_send_server": true, 00:21:36.437 "enable_zerocopy_send_client": false, 00:21:36.437 "zerocopy_threshold": 0, 00:21:36.437 "tls_version": 0, 00:21:36.437 "enable_ktls": false 00:21:36.437 } 00:21:36.437 } 00:21:36.437 ] 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "subsystem": "vmd", 00:21:36.437 "config": [] 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "subsystem": "accel", 00:21:36.437 "config": [ 00:21:36.437 { 00:21:36.437 "method": "accel_set_options", 00:21:36.437 "params": { 00:21:36.437 "small_cache_size": 128, 00:21:36.437 "large_cache_size": 16, 00:21:36.437 "task_count": 2048, 00:21:36.437 "sequence_count": 2048, 00:21:36.437 "buf_count": 2048 00:21:36.437 } 00:21:36.437 } 00:21:36.437 ] 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "subsystem": "bdev", 00:21:36.437 "config": [ 00:21:36.437 { 00:21:36.437 "method": "bdev_set_options", 00:21:36.437 "params": { 00:21:36.437 "bdev_io_pool_size": 65535, 00:21:36.437 "bdev_io_cache_size": 256, 00:21:36.437 "bdev_auto_examine": true, 00:21:36.437 "iobuf_small_cache_size": 128, 00:21:36.437 "iobuf_large_cache_size": 16 00:21:36.437 } 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "method": "bdev_raid_set_options", 00:21:36.437 "params": { 00:21:36.437 "process_window_size_kb": 1024, 00:21:36.437 "process_max_bandwidth_mb_sec": 0 00:21:36.437 } 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "method": "bdev_iscsi_set_options", 00:21:36.437 "params": { 00:21:36.437 "timeout_sec": 30 00:21:36.437 } 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "method": "bdev_nvme_set_options", 00:21:36.437 "params": { 00:21:36.437 "action_on_timeout": "none", 00:21:36.437 "timeout_us": 0, 00:21:36.437 "timeout_admin_us": 0, 00:21:36.437 "keep_alive_timeout_ms": 10000, 00:21:36.437 "arbitration_burst": 0, 00:21:36.437 "low_priority_weight": 0, 00:21:36.437 "medium_priority_weight": 0, 00:21:36.437 "high_priority_weight": 0, 00:21:36.437 "nvme_adminq_poll_period_us": 10000, 00:21:36.437 "nvme_ioq_poll_period_us": 0, 
00:21:36.437 "io_queue_requests": 0, 00:21:36.437 "delay_cmd_submit": true, 00:21:36.437 "transport_retry_count": 4, 00:21:36.437 "bdev_retry_count": 3, 00:21:36.437 "transport_ack_timeout": 0, 00:21:36.437 "ctrlr_loss_timeout_sec": 0, 00:21:36.437 "reconnect_delay_sec": 0, 00:21:36.437 "fast_io_fail_timeout_sec": 0, 00:21:36.437 "disable_auto_failback": false, 00:21:36.437 "generate_uuids": false, 00:21:36.437 "transport_tos": 0, 00:21:36.437 "nvme_error_stat": false, 00:21:36.437 "rdma_srq_size": 0, 00:21:36.437 "io_path_stat": false, 00:21:36.437 "allow_accel_sequence": false, 00:21:36.437 "rdma_max_cq_size": 0, 00:21:36.437 "rdma_cm_event_timeout_ms": 0, 00:21:36.437 "dhchap_digests": [ 00:21:36.437 "sha256", 00:21:36.437 "sha384", 00:21:36.437 "sha512" 00:21:36.437 ], 00:21:36.437 "dhchap_dhgroups": [ 00:21:36.437 "null", 00:21:36.437 "ffdhe2048", 00:21:36.437 "ffdhe3072", 00:21:36.437 "ffdhe4096", 00:21:36.437 "ffdhe6144", 00:21:36.437 "ffdhe8192" 00:21:36.437 ] 00:21:36.437 } 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "method": "bdev_nvme_set_hotplug", 00:21:36.437 "params": { 00:21:36.437 "period_us": 100000, 00:21:36.437 "enable": false 00:21:36.437 } 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "method": "bdev_malloc_create", 00:21:36.437 "params": { 00:21:36.437 "name": "malloc0", 00:21:36.437 "num_blocks": 8192, 00:21:36.437 "block_size": 4096, 00:21:36.437 "physical_block_size": 4096, 00:21:36.437 "uuid": "9e88bcbb-aa79-459f-951c-793c8de8af65", 00:21:36.437 "optimal_io_boundary": 0, 00:21:36.437 "md_size": 0, 00:21:36.437 "dif_type": 0, 00:21:36.437 "dif_is_head_of_md": false, 00:21:36.437 "dif_pi_format": 0 00:21:36.437 } 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "method": "bdev_wait_for_examine" 00:21:36.437 } 00:21:36.437 ] 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "subsystem": "nbd", 00:21:36.437 "config": [] 00:21:36.437 }, 00:21:36.437 { 00:21:36.437 "subsystem": "scheduler", 00:21:36.438 "config": [ 00:21:36.438 { 00:21:36.438 "method": 
"framework_set_scheduler", 00:21:36.438 "params": { 00:21:36.438 "name": "static" 00:21:36.438 } 00:21:36.438 } 00:21:36.438 ] 00:21:36.438 }, 00:21:36.438 { 00:21:36.438 "subsystem": "nvmf", 00:21:36.438 "config": [ 00:21:36.438 { 00:21:36.438 "method": "nvmf_set_config", 00:21:36.438 "params": { 00:21:36.438 "discovery_filter": "match_any", 00:21:36.438 "admin_cmd_passthru": { 00:21:36.438 "identify_ctrlr": false 00:21:36.438 }, 00:21:36.438 "dhchap_digests": [ 00:21:36.438 "sha256", 00:21:36.438 "sha384", 00:21:36.438 "sha512" 00:21:36.438 ], 00:21:36.438 "dhchap_dhgroups": [ 00:21:36.438 "null", 00:21:36.438 "ffdhe2048", 00:21:36.438 "ffdhe3072", 00:21:36.438 "ffdhe4096", 00:21:36.438 "ffdhe6144", 00:21:36.438 "ffdhe8192" 00:21:36.438 ] 00:21:36.438 } 00:21:36.438 }, 00:21:36.438 { 00:21:36.438 "method": "nvmf_set_max_subsystems", 00:21:36.438 "params": { 00:21:36.438 "max_subsystems": 1024 00:21:36.438 } 00:21:36.438 }, 00:21:36.438 { 00:21:36.438 "method": "nvmf_set_crdt", 00:21:36.438 "params": { 00:21:36.438 "crdt1": 0, 00:21:36.438 "crdt2": 0, 00:21:36.438 "crdt3": 0 00:21:36.438 } 00:21:36.438 }, 00:21:36.438 { 00:21:36.438 "method": "nvmf_create_transport", 00:21:36.438 "params": { 00:21:36.438 "trtype": "TCP", 00:21:36.438 "max_queue_depth": 128, 00:21:36.438 "max_io_qpairs_per_ctrlr": 127, 00:21:36.438 "in_capsule_data_size": 4096, 00:21:36.438 "max_io_size": 131072, 00:21:36.438 "io_unit_size": 131072, 00:21:36.438 "max_aq_depth": 128, 00:21:36.438 "num_shared_buffers": 511, 00:21:36.438 "buf_cache_size": 4294967295, 00:21:36.438 "dif_insert_or_strip": false, 00:21:36.438 "zcopy": false, 00:21:36.438 "c2h_success": false, 00:21:36.438 "sock_priority": 0, 00:21:36.438 "abort_timeout_sec": 1, 00:21:36.438 "ack_timeout": 0, 00:21:36.438 "data_wr_pool_size": 0 00:21:36.438 } 00:21:36.438 }, 00:21:36.438 { 00:21:36.438 "method": "nvmf_create_subsystem", 00:21:36.438 "params": { 00:21:36.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.438 
"allow_any_host": false, 00:21:36.438 "serial_number": "SPDK00000000000001", 00:21:36.438 "model_number": "SPDK bdev Controller", 00:21:36.438 "max_namespaces": 10, 00:21:36.438 "min_cntlid": 1, 00:21:36.438 "max_cntlid": 65519, 00:21:36.438 "ana_reporting": false 00:21:36.438 } 00:21:36.438 }, 00:21:36.438 { 00:21:36.438 "method": "nvmf_subsystem_add_host", 00:21:36.438 "params": { 00:21:36.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.438 "host": "nqn.2016-06.io.spdk:host1", 00:21:36.438 "psk": "key0" 00:21:36.438 } 00:21:36.438 }, 00:21:36.438 { 00:21:36.438 "method": "nvmf_subsystem_add_ns", 00:21:36.438 "params": { 00:21:36.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.438 "namespace": { 00:21:36.438 "nsid": 1, 00:21:36.438 "bdev_name": "malloc0", 00:21:36.438 "nguid": "9E88BCBBAA79459F951C793C8DE8AF65", 00:21:36.438 "uuid": "9e88bcbb-aa79-459f-951c-793c8de8af65", 00:21:36.438 "no_auto_visible": false 00:21:36.438 } 00:21:36.438 } 00:21:36.438 }, 00:21:36.438 { 00:21:36.438 "method": "nvmf_subsystem_add_listener", 00:21:36.438 "params": { 00:21:36.438 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.438 "listen_address": { 00:21:36.438 "trtype": "TCP", 00:21:36.438 "adrfam": "IPv4", 00:21:36.438 "traddr": "10.0.0.2", 00:21:36.438 "trsvcid": "4420" 00:21:36.438 }, 00:21:36.438 "secure_channel": true 00:21:36.438 } 00:21:36.438 } 00:21:36.438 ] 00:21:36.438 } 00:21:36.438 ] 00:21:36.438 }' 00:21:36.438 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:36.698 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:21:36.698 "subsystems": [ 00:21:36.698 { 00:21:36.698 "subsystem": "keyring", 00:21:36.698 "config": [ 00:21:36.698 { 00:21:36.698 "method": "keyring_file_add_key", 00:21:36.698 "params": { 00:21:36.698 "name": "key0", 00:21:36.698 "path": "/tmp/tmp.5dhRnXSWg5" 00:21:36.698 } 
00:21:36.698 } 00:21:36.698 ] 00:21:36.698 }, 00:21:36.698 { 00:21:36.698 "subsystem": "iobuf", 00:21:36.698 "config": [ 00:21:36.698 { 00:21:36.698 "method": "iobuf_set_options", 00:21:36.698 "params": { 00:21:36.698 "small_pool_count": 8192, 00:21:36.698 "large_pool_count": 1024, 00:21:36.698 "small_bufsize": 8192, 00:21:36.698 "large_bufsize": 135168, 00:21:36.698 "enable_numa": false 00:21:36.698 } 00:21:36.698 } 00:21:36.698 ] 00:21:36.698 }, 00:21:36.698 { 00:21:36.698 "subsystem": "sock", 00:21:36.698 "config": [ 00:21:36.698 { 00:21:36.698 "method": "sock_set_default_impl", 00:21:36.698 "params": { 00:21:36.698 "impl_name": "posix" 00:21:36.698 } 00:21:36.698 }, 00:21:36.698 { 00:21:36.698 "method": "sock_impl_set_options", 00:21:36.698 "params": { 00:21:36.698 "impl_name": "ssl", 00:21:36.698 "recv_buf_size": 4096, 00:21:36.698 "send_buf_size": 4096, 00:21:36.698 "enable_recv_pipe": true, 00:21:36.698 "enable_quickack": false, 00:21:36.698 "enable_placement_id": 0, 00:21:36.698 "enable_zerocopy_send_server": true, 00:21:36.698 "enable_zerocopy_send_client": false, 00:21:36.698 "zerocopy_threshold": 0, 00:21:36.698 "tls_version": 0, 00:21:36.698 "enable_ktls": false 00:21:36.698 } 00:21:36.698 }, 00:21:36.698 { 00:21:36.698 "method": "sock_impl_set_options", 00:21:36.698 "params": { 00:21:36.698 "impl_name": "posix", 00:21:36.698 "recv_buf_size": 2097152, 00:21:36.698 "send_buf_size": 2097152, 00:21:36.698 "enable_recv_pipe": true, 00:21:36.698 "enable_quickack": false, 00:21:36.698 "enable_placement_id": 0, 00:21:36.698 "enable_zerocopy_send_server": true, 00:21:36.698 "enable_zerocopy_send_client": false, 00:21:36.698 "zerocopy_threshold": 0, 00:21:36.698 "tls_version": 0, 00:21:36.698 "enable_ktls": false 00:21:36.698 } 00:21:36.698 } 00:21:36.698 ] 00:21:36.698 }, 00:21:36.698 { 00:21:36.698 "subsystem": "vmd", 00:21:36.698 "config": [] 00:21:36.698 }, 00:21:36.698 { 00:21:36.698 "subsystem": "accel", 00:21:36.698 "config": [ 00:21:36.698 { 00:21:36.698 
"method": "accel_set_options", 00:21:36.698 "params": { 00:21:36.698 "small_cache_size": 128, 00:21:36.698 "large_cache_size": 16, 00:21:36.698 "task_count": 2048, 00:21:36.698 "sequence_count": 2048, 00:21:36.698 "buf_count": 2048 00:21:36.698 } 00:21:36.698 } 00:21:36.698 ] 00:21:36.698 }, 00:21:36.698 { 00:21:36.698 "subsystem": "bdev", 00:21:36.698 "config": [ 00:21:36.698 { 00:21:36.698 "method": "bdev_set_options", 00:21:36.698 "params": { 00:21:36.698 "bdev_io_pool_size": 65535, 00:21:36.698 "bdev_io_cache_size": 256, 00:21:36.698 "bdev_auto_examine": true, 00:21:36.698 "iobuf_small_cache_size": 128, 00:21:36.698 "iobuf_large_cache_size": 16 00:21:36.698 } 00:21:36.698 }, 00:21:36.698 { 00:21:36.698 "method": "bdev_raid_set_options", 00:21:36.698 "params": { 00:21:36.698 "process_window_size_kb": 1024, 00:21:36.698 "process_max_bandwidth_mb_sec": 0 00:21:36.698 } 00:21:36.698 }, 00:21:36.698 { 00:21:36.698 "method": "bdev_iscsi_set_options", 00:21:36.698 "params": { 00:21:36.698 "timeout_sec": 30 00:21:36.698 } 00:21:36.698 }, 00:21:36.698 { 00:21:36.698 "method": "bdev_nvme_set_options", 00:21:36.698 "params": { 00:21:36.698 "action_on_timeout": "none", 00:21:36.698 "timeout_us": 0, 00:21:36.698 "timeout_admin_us": 0, 00:21:36.698 "keep_alive_timeout_ms": 10000, 00:21:36.698 "arbitration_burst": 0, 00:21:36.698 "low_priority_weight": 0, 00:21:36.698 "medium_priority_weight": 0, 00:21:36.698 "high_priority_weight": 0, 00:21:36.698 "nvme_adminq_poll_period_us": 10000, 00:21:36.698 "nvme_ioq_poll_period_us": 0, 00:21:36.698 "io_queue_requests": 512, 00:21:36.699 "delay_cmd_submit": true, 00:21:36.699 "transport_retry_count": 4, 00:21:36.699 "bdev_retry_count": 3, 00:21:36.699 "transport_ack_timeout": 0, 00:21:36.699 "ctrlr_loss_timeout_sec": 0, 00:21:36.699 "reconnect_delay_sec": 0, 00:21:36.699 "fast_io_fail_timeout_sec": 0, 00:21:36.699 "disable_auto_failback": false, 00:21:36.699 "generate_uuids": false, 00:21:36.699 "transport_tos": 0, 00:21:36.699 
"nvme_error_stat": false, 00:21:36.699 "rdma_srq_size": 0, 00:21:36.699 "io_path_stat": false, 00:21:36.699 "allow_accel_sequence": false, 00:21:36.699 "rdma_max_cq_size": 0, 00:21:36.699 "rdma_cm_event_timeout_ms": 0, 00:21:36.699 "dhchap_digests": [ 00:21:36.699 "sha256", 00:21:36.699 "sha384", 00:21:36.699 "sha512" 00:21:36.699 ], 00:21:36.699 "dhchap_dhgroups": [ 00:21:36.699 "null", 00:21:36.699 "ffdhe2048", 00:21:36.699 "ffdhe3072", 00:21:36.699 "ffdhe4096", 00:21:36.699 "ffdhe6144", 00:21:36.699 "ffdhe8192" 00:21:36.699 ] 00:21:36.699 } 00:21:36.699 }, 00:21:36.699 { 00:21:36.699 "method": "bdev_nvme_attach_controller", 00:21:36.699 "params": { 00:21:36.699 "name": "TLSTEST", 00:21:36.699 "trtype": "TCP", 00:21:36.699 "adrfam": "IPv4", 00:21:36.699 "traddr": "10.0.0.2", 00:21:36.699 "trsvcid": "4420", 00:21:36.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.699 "prchk_reftag": false, 00:21:36.699 "prchk_guard": false, 00:21:36.699 "ctrlr_loss_timeout_sec": 0, 00:21:36.699 "reconnect_delay_sec": 0, 00:21:36.699 "fast_io_fail_timeout_sec": 0, 00:21:36.699 "psk": "key0", 00:21:36.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.699 "hdgst": false, 00:21:36.699 "ddgst": false, 00:21:36.699 "multipath": "multipath" 00:21:36.699 } 00:21:36.699 }, 00:21:36.699 { 00:21:36.699 "method": "bdev_nvme_set_hotplug", 00:21:36.699 "params": { 00:21:36.699 "period_us": 100000, 00:21:36.699 "enable": false 00:21:36.699 } 00:21:36.699 }, 00:21:36.699 { 00:21:36.699 "method": "bdev_wait_for_examine" 00:21:36.699 } 00:21:36.699 ] 00:21:36.699 }, 00:21:36.699 { 00:21:36.699 "subsystem": "nbd", 00:21:36.699 "config": [] 00:21:36.699 } 00:21:36.699 ] 00:21:36.699 }' 00:21:36.699 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 1719994 00:21:36.699 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1719994 ']' 00:21:36.699 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 1719994 00:21:36.699 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:36.699 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.699 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1719994 00:21:36.699 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:36.699 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:36.699 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1719994' 00:21:36.699 killing process with pid 1719994 00:21:36.699 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1719994 00:21:36.699 Received shutdown signal, test time was about 10.000000 seconds 00:21:36.699 00:21:36.699 Latency(us) 00:21:36.699 [2024-11-20T07:18:50.727Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:36.699 [2024-11-20T07:18:50.727Z] =================================================================================================================== 00:21:36.699 [2024-11-20T07:18:50.727Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:21:36.699 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1719994 00:21:36.958 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 1719730 00:21:36.958 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1719730 ']' 00:21:36.958 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1719730 00:21:36.958 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:36.958 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.958 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1719730 00:21:36.958 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:36.958 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:36.958 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1719730' 00:21:36.958 killing process with pid 1719730 00:21:36.958 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1719730 00:21:36.958 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1719730 00:21:37.218 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:21:37.218 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:37.218 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:37.218 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:21:37.218 "subsystems": [ 00:21:37.218 { 00:21:37.218 "subsystem": "keyring", 00:21:37.218 "config": [ 00:21:37.218 { 00:21:37.218 "method": "keyring_file_add_key", 00:21:37.218 "params": { 00:21:37.218 "name": "key0", 00:21:37.218 "path": "/tmp/tmp.5dhRnXSWg5" 00:21:37.218 } 00:21:37.218 } 00:21:37.218 ] 00:21:37.218 }, 00:21:37.218 { 00:21:37.218 "subsystem": "iobuf", 00:21:37.218 "config": [ 00:21:37.218 { 00:21:37.218 "method": "iobuf_set_options", 00:21:37.218 "params": { 00:21:37.218 "small_pool_count": 8192, 00:21:37.218 "large_pool_count": 1024, 00:21:37.218 "small_bufsize": 8192, 00:21:37.218 "large_bufsize": 135168, 00:21:37.218 "enable_numa": false 00:21:37.218 } 00:21:37.218 } 00:21:37.218 ] 00:21:37.218 }, 
00:21:37.218 { 00:21:37.218 "subsystem": "sock", 00:21:37.218 "config": [ 00:21:37.218 { 00:21:37.218 "method": "sock_set_default_impl", 00:21:37.218 "params": { 00:21:37.218 "impl_name": "posix" 00:21:37.218 } 00:21:37.218 }, 00:21:37.218 { 00:21:37.218 "method": "sock_impl_set_options", 00:21:37.218 "params": { 00:21:37.218 "impl_name": "ssl", 00:21:37.218 "recv_buf_size": 4096, 00:21:37.218 "send_buf_size": 4096, 00:21:37.218 "enable_recv_pipe": true, 00:21:37.218 "enable_quickack": false, 00:21:37.218 "enable_placement_id": 0, 00:21:37.218 "enable_zerocopy_send_server": true, 00:21:37.218 "enable_zerocopy_send_client": false, 00:21:37.218 "zerocopy_threshold": 0, 00:21:37.218 "tls_version": 0, 00:21:37.218 "enable_ktls": false 00:21:37.218 } 00:21:37.218 }, 00:21:37.218 { 00:21:37.218 "method": "sock_impl_set_options", 00:21:37.218 "params": { 00:21:37.218 "impl_name": "posix", 00:21:37.218 "recv_buf_size": 2097152, 00:21:37.218 "send_buf_size": 2097152, 00:21:37.218 "enable_recv_pipe": true, 00:21:37.218 "enable_quickack": false, 00:21:37.218 "enable_placement_id": 0, 00:21:37.218 "enable_zerocopy_send_server": true, 00:21:37.218 "enable_zerocopy_send_client": false, 00:21:37.218 "zerocopy_threshold": 0, 00:21:37.218 "tls_version": 0, 00:21:37.218 "enable_ktls": false 00:21:37.218 } 00:21:37.218 } 00:21:37.218 ] 00:21:37.218 }, 00:21:37.218 { 00:21:37.218 "subsystem": "vmd", 00:21:37.218 "config": [] 00:21:37.218 }, 00:21:37.218 { 00:21:37.218 "subsystem": "accel", 00:21:37.218 "config": [ 00:21:37.218 { 00:21:37.218 "method": "accel_set_options", 00:21:37.218 "params": { 00:21:37.218 "small_cache_size": 128, 00:21:37.218 "large_cache_size": 16, 00:21:37.218 "task_count": 2048, 00:21:37.218 "sequence_count": 2048, 00:21:37.218 "buf_count": 2048 00:21:37.218 } 00:21:37.218 } 00:21:37.218 ] 00:21:37.218 }, 00:21:37.218 { 00:21:37.218 "subsystem": "bdev", 00:21:37.218 "config": [ 00:21:37.218 { 00:21:37.218 "method": "bdev_set_options", 00:21:37.218 "params": { 
00:21:37.218 "bdev_io_pool_size": 65535, 00:21:37.218 "bdev_io_cache_size": 256, 00:21:37.218 "bdev_auto_examine": true, 00:21:37.218 "iobuf_small_cache_size": 128, 00:21:37.218 "iobuf_large_cache_size": 16 00:21:37.218 } 00:21:37.218 }, 00:21:37.218 { 00:21:37.218 "method": "bdev_raid_set_options", 00:21:37.218 "params": { 00:21:37.218 "process_window_size_kb": 1024, 00:21:37.218 "process_max_bandwidth_mb_sec": 0 00:21:37.218 } 00:21:37.218 }, 00:21:37.218 { 00:21:37.219 "method": "bdev_iscsi_set_options", 00:21:37.219 "params": { 00:21:37.219 "timeout_sec": 30 00:21:37.219 } 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "method": "bdev_nvme_set_options", 00:21:37.219 "params": { 00:21:37.219 "action_on_timeout": "none", 00:21:37.219 "timeout_us": 0, 00:21:37.219 "timeout_admin_us": 0, 00:21:37.219 "keep_alive_timeout_ms": 10000, 00:21:37.219 "arbitration_burst": 0, 00:21:37.219 "low_priority_weight": 0, 00:21:37.219 "medium_priority_weight": 0, 00:21:37.219 "high_priority_weight": 0, 00:21:37.219 "nvme_adminq_poll_period_us": 10000, 00:21:37.219 "nvme_ioq_poll_period_us": 0, 00:21:37.219 "io_queue_requests": 0, 00:21:37.219 "delay_cmd_submit": true, 00:21:37.219 "transport_retry_count": 4, 00:21:37.219 "bdev_retry_count": 3, 00:21:37.219 "transport_ack_timeout": 0, 00:21:37.219 "ctrlr_loss_timeout_sec": 0, 00:21:37.219 "reconnect_delay_sec": 0, 00:21:37.219 "fast_io_fail_timeout_sec": 0, 00:21:37.219 "disable_auto_failback": false, 00:21:37.219 "generate_uuids": false, 00:21:37.219 "transport_tos": 0, 00:21:37.219 "nvme_error_stat": false, 00:21:37.219 "rdma_srq_size": 0, 00:21:37.219 "io_path_stat": false, 00:21:37.219 "allow_accel_sequence": false, 00:21:37.219 "rdma_max_cq_size": 0, 00:21:37.219 "rdma_cm_event_timeout_ms": 0, 00:21:37.219 "dhchap_digests": [ 00:21:37.219 "sha256", 00:21:37.219 "sha384", 00:21:37.219 "sha512" 00:21:37.219 ], 00:21:37.219 "dhchap_dhgroups": [ 00:21:37.219 "null", 00:21:37.219 "ffdhe2048", 00:21:37.219 "ffdhe3072", 00:21:37.219 
"ffdhe4096", 00:21:37.219 "ffdhe6144", 00:21:37.219 "ffdhe8192" 00:21:37.219 ] 00:21:37.219 } 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "method": "bdev_nvme_set_hotplug", 00:21:37.219 "params": { 00:21:37.219 "period_us": 100000, 00:21:37.219 "enable": false 00:21:37.219 } 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "method": "bdev_malloc_create", 00:21:37.219 "params": { 00:21:37.219 "name": "malloc0", 00:21:37.219 "num_blocks": 8192, 00:21:37.219 "block_size": 4096, 00:21:37.219 "physical_block_size": 4096, 00:21:37.219 "uuid": "9e88bcbb-aa79-459f-951c-793c8de8af65", 00:21:37.219 "optimal_io_boundary": 0, 00:21:37.219 "md_size": 0, 00:21:37.219 "dif_type": 0, 00:21:37.219 "dif_is_head_of_md": false, 00:21:37.219 "dif_pi_format": 0 00:21:37.219 } 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "method": "bdev_wait_for_examine" 00:21:37.219 } 00:21:37.219 ] 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "subsystem": "nbd", 00:21:37.219 "config": [] 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "subsystem": "scheduler", 00:21:37.219 "config": [ 00:21:37.219 { 00:21:37.219 "method": "framework_set_scheduler", 00:21:37.219 "params": { 00:21:37.219 "name": "static" 00:21:37.219 } 00:21:37.219 } 00:21:37.219 ] 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "subsystem": "nvmf", 00:21:37.219 "config": [ 00:21:37.219 { 00:21:37.219 "method": "nvmf_set_config", 00:21:37.219 "params": { 00:21:37.219 "discovery_filter": "match_any", 00:21:37.219 "admin_cmd_passthru": { 00:21:37.219 "identify_ctrlr": false 00:21:37.219 }, 00:21:37.219 "dhchap_digests": [ 00:21:37.219 "sha256", 00:21:37.219 "sha384", 00:21:37.219 "sha512" 00:21:37.219 ], 00:21:37.219 "dhchap_dhgroups": [ 00:21:37.219 "null", 00:21:37.219 "ffdhe2048", 00:21:37.219 "ffdhe3072", 00:21:37.219 "ffdhe4096", 00:21:37.219 "ffdhe6144", 00:21:37.219 "ffdhe8192" 00:21:37.219 ] 00:21:37.219 } 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "method": "nvmf_set_max_subsystems", 00:21:37.219 "params": { 00:21:37.219 "max_subsystems": 1024 
00:21:37.219 } 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "method": "nvmf_set_crdt", 00:21:37.219 "params": { 00:21:37.219 "crdt1": 0, 00:21:37.219 "crdt2": 0, 00:21:37.219 "crdt3": 0 00:21:37.219 } 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "method": "nvmf_create_transport", 00:21:37.219 "params": { 00:21:37.219 "trtype": "TCP", 00:21:37.219 "max_queue_depth": 128, 00:21:37.219 "max_io_qpairs_per_ctrlr": 127, 00:21:37.219 "in_capsule_data_size": 4096, 00:21:37.219 "max_io_size": 131072, 00:21:37.219 "io_unit_size": 131072, 00:21:37.219 "max_aq_depth": 128, 00:21:37.219 "num_shared_buffers": 511, 00:21:37.219 "buf_cache_size": 4294967295, 00:21:37.219 "dif_insert_or_strip": false, 00:21:37.219 "zcopy": false, 00:21:37.219 "c2h_success": false, 00:21:37.219 "sock_priority": 0, 00:21:37.219 "abort_timeout_sec": 1, 00:21:37.219 "ack_timeout": 0, 00:21:37.219 "data_wr_pool_size": 0 00:21:37.219 } 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "method": "nvmf_create_subsystem", 00:21:37.219 "params": { 00:21:37.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.219 "allow_any_host": false, 00:21:37.219 "serial_number": "SPDK00000000000001", 00:21:37.219 "model_number": "SPDK bdev Controller", 00:21:37.219 "max_namespaces": 10, 00:21:37.219 "min_cntlid": 1, 00:21:37.219 "max_cntlid": 65519, 00:21:37.219 "ana_reporting": false 00:21:37.219 } 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "method": "nvmf_subsystem_add_host", 00:21:37.219 "params": { 00:21:37.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.219 "host": "nqn.2016-06.io.spdk:host1", 00:21:37.219 "psk": "key0" 00:21:37.219 } 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "method": "nvmf_subsystem_add_ns", 00:21:37.219 "params": { 00:21:37.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.219 "namespace": { 00:21:37.219 "nsid": 1, 00:21:37.219 "bdev_name": "malloc0", 00:21:37.219 "nguid": "9E88BCBBAA79459F951C793C8DE8AF65", 00:21:37.219 "uuid": "9e88bcbb-aa79-459f-951c-793c8de8af65", 00:21:37.219 "no_auto_visible": 
false 00:21:37.219 } 00:21:37.219 } 00:21:37.219 }, 00:21:37.219 { 00:21:37.219 "method": "nvmf_subsystem_add_listener", 00:21:37.219 "params": { 00:21:37.219 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:37.219 "listen_address": { 00:21:37.219 "trtype": "TCP", 00:21:37.219 "adrfam": "IPv4", 00:21:37.219 "traddr": "10.0.0.2", 00:21:37.219 "trsvcid": "4420" 00:21:37.219 }, 00:21:37.219 "secure_channel": true 00:21:37.219 } 00:21:37.219 } 00:21:37.219 ] 00:21:37.219 } 00:21:37.219 ] 00:21:37.219 }' 00:21:37.219 08:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.219 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:21:37.219 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1720249 00:21:37.219 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1720249 00:21:37.219 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1720249 ']' 00:21:37.219 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:37.219 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:37.219 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:37.219 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:21:37.219 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:37.219 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:37.219 [2024-11-20 08:18:51.041708] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:21:37.219 [2024-11-20 08:18:51.041759] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:37.219 [2024-11-20 08:18:51.117551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:37.219 [2024-11-20 08:18:51.154655] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:37.220 [2024-11-20 08:18:51.154692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:37.220 [2024-11-20 08:18:51.154699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:37.220 [2024-11-20 08:18:51.154705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:37.220 [2024-11-20 08:18:51.154712] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:37.220 [2024-11-20 08:18:51.155294] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:37.479 [2024-11-20 08:18:51.366888] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:37.479 [2024-11-20 08:18:51.398911] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:37.479 [2024-11-20 08:18:51.399116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=1720488 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 1720488 /var/tmp/bdevperf.sock 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1720488 ']' 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:38.048 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:38.048 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:21:38.048 "subsystems": [ 00:21:38.048 { 00:21:38.048 "subsystem": "keyring", 00:21:38.048 "config": [ 00:21:38.048 { 00:21:38.048 "method": "keyring_file_add_key", 00:21:38.048 "params": { 00:21:38.048 "name": "key0", 00:21:38.048 "path": "/tmp/tmp.5dhRnXSWg5" 00:21:38.048 } 00:21:38.048 } 00:21:38.048 ] 00:21:38.048 }, 00:21:38.048 { 00:21:38.048 "subsystem": "iobuf", 00:21:38.048 "config": [ 00:21:38.048 { 00:21:38.048 "method": "iobuf_set_options", 00:21:38.048 "params": { 00:21:38.048 "small_pool_count": 8192, 00:21:38.048 "large_pool_count": 1024, 00:21:38.048 "small_bufsize": 8192, 00:21:38.048 "large_bufsize": 135168, 00:21:38.048 "enable_numa": false 00:21:38.048 } 00:21:38.048 } 00:21:38.048 ] 00:21:38.048 }, 00:21:38.048 { 00:21:38.048 "subsystem": "sock", 00:21:38.048 "config": [ 00:21:38.048 { 00:21:38.048 "method": "sock_set_default_impl", 00:21:38.048 "params": { 00:21:38.048 "impl_name": "posix" 00:21:38.048 } 00:21:38.048 }, 00:21:38.048 { 00:21:38.048 "method": "sock_impl_set_options", 00:21:38.048 "params": { 00:21:38.048 "impl_name": "ssl", 00:21:38.048 "recv_buf_size": 4096, 00:21:38.048 "send_buf_size": 4096, 00:21:38.048 "enable_recv_pipe": true, 00:21:38.048 "enable_quickack": false, 00:21:38.048 "enable_placement_id": 0, 00:21:38.048 "enable_zerocopy_send_server": true, 00:21:38.048 "enable_zerocopy_send_client": false, 00:21:38.048 "zerocopy_threshold": 0, 00:21:38.048 "tls_version": 0, 00:21:38.048 "enable_ktls": false 00:21:38.048 } 00:21:38.048 }, 00:21:38.048 { 00:21:38.048 "method": "sock_impl_set_options", 00:21:38.048 "params": { 
00:21:38.048 "impl_name": "posix", 00:21:38.048 "recv_buf_size": 2097152, 00:21:38.048 "send_buf_size": 2097152, 00:21:38.048 "enable_recv_pipe": true, 00:21:38.048 "enable_quickack": false, 00:21:38.049 "enable_placement_id": 0, 00:21:38.049 "enable_zerocopy_send_server": true, 00:21:38.049 "enable_zerocopy_send_client": false, 00:21:38.049 "zerocopy_threshold": 0, 00:21:38.049 "tls_version": 0, 00:21:38.049 "enable_ktls": false 00:21:38.049 } 00:21:38.049 } 00:21:38.049 ] 00:21:38.049 }, 00:21:38.049 { 00:21:38.049 "subsystem": "vmd", 00:21:38.049 "config": [] 00:21:38.049 }, 00:21:38.049 { 00:21:38.049 "subsystem": "accel", 00:21:38.049 "config": [ 00:21:38.049 { 00:21:38.049 "method": "accel_set_options", 00:21:38.049 "params": { 00:21:38.049 "small_cache_size": 128, 00:21:38.049 "large_cache_size": 16, 00:21:38.049 "task_count": 2048, 00:21:38.049 "sequence_count": 2048, 00:21:38.049 "buf_count": 2048 00:21:38.049 } 00:21:38.049 } 00:21:38.049 ] 00:21:38.049 }, 00:21:38.049 { 00:21:38.049 "subsystem": "bdev", 00:21:38.049 "config": [ 00:21:38.049 { 00:21:38.049 "method": "bdev_set_options", 00:21:38.049 "params": { 00:21:38.049 "bdev_io_pool_size": 65535, 00:21:38.049 "bdev_io_cache_size": 256, 00:21:38.049 "bdev_auto_examine": true, 00:21:38.049 "iobuf_small_cache_size": 128, 00:21:38.049 "iobuf_large_cache_size": 16 00:21:38.049 } 00:21:38.049 }, 00:21:38.049 { 00:21:38.049 "method": "bdev_raid_set_options", 00:21:38.049 "params": { 00:21:38.049 "process_window_size_kb": 1024, 00:21:38.049 "process_max_bandwidth_mb_sec": 0 00:21:38.049 } 00:21:38.049 }, 00:21:38.049 { 00:21:38.049 "method": "bdev_iscsi_set_options", 00:21:38.049 "params": { 00:21:38.049 "timeout_sec": 30 00:21:38.049 } 00:21:38.049 }, 00:21:38.049 { 00:21:38.049 "method": "bdev_nvme_set_options", 00:21:38.049 "params": { 00:21:38.049 "action_on_timeout": "none", 00:21:38.049 "timeout_us": 0, 00:21:38.049 "timeout_admin_us": 0, 00:21:38.049 "keep_alive_timeout_ms": 10000, 00:21:38.049 
"arbitration_burst": 0, 00:21:38.049 "low_priority_weight": 0, 00:21:38.049 "medium_priority_weight": 0, 00:21:38.049 "high_priority_weight": 0, 00:21:38.049 "nvme_adminq_poll_period_us": 10000, 00:21:38.049 "nvme_ioq_poll_period_us": 0, 00:21:38.049 "io_queue_requests": 512, 00:21:38.049 "delay_cmd_submit": true, 00:21:38.049 "transport_retry_count": 4, 00:21:38.049 "bdev_retry_count": 3, 00:21:38.049 "transport_ack_timeout": 0, 00:21:38.049 "ctrlr_loss_timeout_sec": 0, 00:21:38.049 "reconnect_delay_sec": 0, 00:21:38.049 "fast_io_fail_timeout_sec": 0, 00:21:38.049 "disable_auto_failback": false, 00:21:38.049 "generate_uuids": false, 00:21:38.049 "transport_tos": 0, 00:21:38.049 "nvme_error_stat": false, 00:21:38.049 "rdma_srq_size": 0, 00:21:38.049 "io_path_stat": false, 00:21:38.049 "allow_accel_sequence": false, 00:21:38.049 "rdma_max_cq_size": 0, 00:21:38.049 "rdma_cm_event_timeout_ms": 0, 00:21:38.049 "dhchap_digests": [ 00:21:38.049 "sha256", 00:21:38.049 "sha384", 00:21:38.049 "sha512" 00:21:38.049 ], 00:21:38.049 "dhchap_dhgroups": [ 00:21:38.049 "null", 00:21:38.049 "ffdhe2048", 00:21:38.049 "ffdhe3072", 00:21:38.049 "ffdhe4096", 00:21:38.049 "ffdhe6144", 00:21:38.049 "ffdhe8192" 00:21:38.049 ] 00:21:38.049 } 00:21:38.049 }, 00:21:38.049 { 00:21:38.049 "method": "bdev_nvme_attach_controller", 00:21:38.049 "params": { 00:21:38.049 "name": "TLSTEST", 00:21:38.049 "trtype": "TCP", 00:21:38.049 "adrfam": "IPv4", 00:21:38.049 "traddr": "10.0.0.2", 00:21:38.049 "trsvcid": "4420", 00:21:38.049 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:38.049 "prchk_reftag": false, 00:21:38.049 "prchk_guard": false, 00:21:38.049 "ctrlr_loss_timeout_sec": 0, 00:21:38.049 "reconnect_delay_sec": 0, 00:21:38.049 "fast_io_fail_timeout_sec": 0, 00:21:38.049 "psk": "key0", 00:21:38.049 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:38.049 "hdgst": false, 00:21:38.049 "ddgst": false, 00:21:38.049 "multipath": "multipath" 00:21:38.049 } 00:21:38.049 }, 00:21:38.049 { 00:21:38.049 
"method": "bdev_nvme_set_hotplug", 00:21:38.049 "params": { 00:21:38.049 "period_us": 100000, 00:21:38.049 "enable": false 00:21:38.049 } 00:21:38.049 }, 00:21:38.049 { 00:21:38.049 "method": "bdev_wait_for_examine" 00:21:38.049 } 00:21:38.049 ] 00:21:38.049 }, 00:21:38.049 { 00:21:38.049 "subsystem": "nbd", 00:21:38.049 "config": [] 00:21:38.049 } 00:21:38.049 ] 00:21:38.049 }' 00:21:38.049 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:38.049 08:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:38.049 [2024-11-20 08:18:51.966056] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:21:38.049 [2024-11-20 08:18:51.966105] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1720488 ] 00:21:38.049 [2024-11-20 08:18:52.038959] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:38.308 [2024-11-20 08:18:52.079775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:38.308 [2024-11-20 08:18:52.230194] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:38.876 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:38.876 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:38.876 08:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:21:38.876 Running I/O for 10 seconds... 
00:21:41.191 5509.00 IOPS, 21.52 MiB/s [2024-11-20T07:18:56.156Z] 5584.00 IOPS, 21.81 MiB/s [2024-11-20T07:18:57.092Z] 5557.33 IOPS, 21.71 MiB/s [2024-11-20T07:18:58.036Z] 5571.50 IOPS, 21.76 MiB/s [2024-11-20T07:18:58.973Z] 5524.60 IOPS, 21.58 MiB/s [2024-11-20T07:18:59.910Z] 5508.83 IOPS, 21.52 MiB/s [2024-11-20T07:19:01.287Z] 5511.57 IOPS, 21.53 MiB/s [2024-11-20T07:19:02.225Z] 5505.25 IOPS, 21.50 MiB/s [2024-11-20T07:19:03.160Z] 5522.56 IOPS, 21.57 MiB/s [2024-11-20T07:19:03.160Z] 5532.40 IOPS, 21.61 MiB/s 00:21:49.132 Latency(us) 00:21:49.132 [2024-11-20T07:19:03.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.132 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:21:49.132 Verification LBA range: start 0x0 length 0x2000 00:21:49.132 TLSTESTn1 : 10.01 5538.10 21.63 0.00 0.00 23079.85 5180.46 24466.77 00:21:49.132 [2024-11-20T07:19:03.160Z] =================================================================================================================== 00:21:49.132 [2024-11-20T07:19:03.160Z] Total : 5538.10 21.63 0.00 0.00 23079.85 5180.46 24466.77 00:21:49.132 { 00:21:49.132 "results": [ 00:21:49.132 { 00:21:49.132 "job": "TLSTESTn1", 00:21:49.132 "core_mask": "0x4", 00:21:49.132 "workload": "verify", 00:21:49.132 "status": "finished", 00:21:49.132 "verify_range": { 00:21:49.132 "start": 0, 00:21:49.132 "length": 8192 00:21:49.132 }, 00:21:49.132 "queue_depth": 128, 00:21:49.133 "io_size": 4096, 00:21:49.133 "runtime": 10.012457, 00:21:49.133 "iops": 5538.101187350917, 00:21:49.133 "mibps": 21.63320776308952, 00:21:49.133 "io_failed": 0, 00:21:49.133 "io_timeout": 0, 00:21:49.133 "avg_latency_us": 23079.847929580486, 00:21:49.133 "min_latency_us": 5180.464761904762, 00:21:49.133 "max_latency_us": 24466.773333333334 00:21:49.133 } 00:21:49.133 ], 00:21:49.133 "core_count": 1 00:21:49.133 } 00:21:49.133 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:21:49.133 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 1720488 00:21:49.133 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1720488 ']' 00:21:49.133 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1720488 00:21:49.133 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:49.133 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.133 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1720488 00:21:49.133 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:21:49.133 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:21:49.133 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1720488' 00:21:49.133 killing process with pid 1720488 00:21:49.133 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1720488 00:21:49.133 Received shutdown signal, test time was about 10.000000 seconds 00:21:49.133 00:21:49.133 Latency(us) 00:21:49.133 [2024-11-20T07:19:03.161Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:49.133 [2024-11-20T07:19:03.161Z] =================================================================================================================== 00:21:49.133 [2024-11-20T07:19:03.161Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:49.133 08:19:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1720488 00:21:49.133 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 1720249 00:21:49.133 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 1720249 ']' 00:21:49.133 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1720249 00:21:49.133 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:49.133 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:49.133 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1720249 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1720249' 00:21:49.392 killing process with pid 1720249 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1720249 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1720249 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1722334 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1722334 00:21:49.392 
08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1722334 ']' 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:49.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:49.392 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.651 [2024-11-20 08:19:03.420220] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:21:49.651 [2024-11-20 08:19:03.420267] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:49.651 [2024-11-20 08:19:03.483622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:49.651 [2024-11-20 08:19:03.524422] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:49.651 [2024-11-20 08:19:03.524457] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:49.651 [2024-11-20 08:19:03.524464] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:49.651 [2024-11-20 08:19:03.524470] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:49.651 [2024-11-20 08:19:03.524475] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:49.651 [2024-11-20 08:19:03.525036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:49.651 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.651 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:49.651 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:49.651 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.651 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:49.651 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.651 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.5dhRnXSWg5 00:21:49.651 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.5dhRnXSWg5 00:21:49.651 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:49.910 [2024-11-20 08:19:03.827749] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.911 08:19:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:21:50.169 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:21:50.428 [2024-11-20 08:19:04.224770] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:21:50.428 [2024-11-20 08:19:04.224970] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:50.428 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:21:50.428 malloc0 00:21:50.687 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:21:50.687 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.5dhRnXSWg5 00:21:50.945 08:19:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:21:51.205 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:51.205 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=1722592 00:21:51.205 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:51.205 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 1722592 /var/tmp/bdevperf.sock 00:21:51.205 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1722592 ']' 00:21:51.205 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:51.205 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:51.205 
08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:51.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:51.205 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:51.205 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:51.205 [2024-11-20 08:19:05.082801] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:21:51.205 [2024-11-20 08:19:05.082852] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1722592 ] 00:21:51.205 [2024-11-20 08:19:05.159325] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:51.205 [2024-11-20 08:19:05.199801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:51.463 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:51.463 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:51.463 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5dhRnXSWg5 00:21:51.722 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:51.722 [2024-11-20 08:19:05.650438] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:21:51.722 nvme0n1 00:21:51.722 08:19:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:51.981 Running I/O for 1 seconds... 00:21:52.917 5253.00 IOPS, 20.52 MiB/s 00:21:52.917 Latency(us) 00:21:52.917 [2024-11-20T07:19:06.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.917 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:52.917 Verification LBA range: start 0x0 length 0x2000 00:21:52.917 nvme0n1 : 1.02 5293.36 20.68 0.00 0.00 24001.42 4525.10 50431.51 00:21:52.917 [2024-11-20T07:19:06.945Z] =================================================================================================================== 00:21:52.917 [2024-11-20T07:19:06.945Z] Total : 5293.36 20.68 0.00 0.00 24001.42 4525.10 50431.51 00:21:52.917 { 00:21:52.917 "results": [ 00:21:52.917 { 00:21:52.917 "job": "nvme0n1", 00:21:52.917 "core_mask": "0x2", 00:21:52.917 "workload": "verify", 00:21:52.917 "status": "finished", 00:21:52.917 "verify_range": { 00:21:52.917 "start": 0, 00:21:52.917 "length": 8192 00:21:52.917 }, 00:21:52.917 "queue_depth": 128, 00:21:52.917 "io_size": 4096, 00:21:52.917 "runtime": 1.016556, 00:21:52.917 "iops": 5293.363080833717, 00:21:52.917 "mibps": 20.677199534506705, 00:21:52.917 "io_failed": 0, 00:21:52.917 "io_timeout": 0, 00:21:52.917 "avg_latency_us": 24001.42255449067, 00:21:52.917 "min_latency_us": 4525.104761904762, 00:21:52.917 "max_latency_us": 50431.51238095238 00:21:52.917 } 00:21:52.917 ], 00:21:52.917 "core_count": 1 00:21:52.917 } 00:21:52.917 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 1722592 00:21:52.917 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1722592 ']' 00:21:52.917 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 1722592 00:21:52.917 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:52.917 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:52.917 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1722592 00:21:52.917 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:52.917 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:52.917 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1722592' 00:21:52.917 killing process with pid 1722592 00:21:52.917 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1722592 00:21:52.917 Received shutdown signal, test time was about 1.000000 seconds 00:21:52.917 00:21:52.917 Latency(us) 00:21:52.917 [2024-11-20T07:19:06.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:52.917 [2024-11-20T07:19:06.946Z] =================================================================================================================== 00:21:52.918 [2024-11-20T07:19:06.946Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:52.918 08:19:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1722592 00:21:53.177 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 1722334 00:21:53.177 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1722334 ']' 00:21:53.177 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1722334 00:21:53.177 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:53.177 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:53.177 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1722334 00:21:53.177 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:53.177 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:53.177 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1722334' 00:21:53.177 killing process with pid 1722334 00:21:53.177 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1722334 00:21:53.177 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1722334 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1722993 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1722993 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1722993 ']' 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:53.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.436 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.436 [2024-11-20 08:19:07.372140] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:21:53.436 [2024-11-20 08:19:07.372196] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:53.436 [2024-11-20 08:19:07.451617] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.696 [2024-11-20 08:19:07.489435] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:53.696 [2024-11-20 08:19:07.489475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:53.696 [2024-11-20 08:19:07.489482] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:53.696 [2024-11-20 08:19:07.489488] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:53.696 [2024-11-20 08:19:07.489494] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
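The app_setup_trace notices above come from starting nvmf_tgt with `-e 0xFFFF`, which enables all sixteen tracepoint groups; the snapshot can then be captured with `spdk_trace -s nvmf -i 0`. As a minimal, hypothetical sketch (independent of SPDK itself — the mapping of bit positions to named SPDK trace groups is not shown), the mask is simply a bit field in which each set bit enables one group:

```python
# Decode a tracepoint group mask (e.g. the -e 0xFFFF passed to nvmf_tgt)
# into the list of enabled group bit positions. Which SPDK trace group
# each bit corresponds to is SPDK-specific and omitted here.
def enabled_trace_groups(mask: int) -> list[int]:
    return [bit for bit in range(mask.bit_length()) if mask & (1 << bit)]

print(enabled_trace_groups(0xFFFF))  # bits 0..15 are all enabled
```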
00:21:53.696 [2024-11-20 08:19:07.490046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.696 [2024-11-20 08:19:07.631580] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:53.696 malloc0 00:21:53.696 [2024-11-20 08:19:07.659725] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:53.696 [2024-11-20 08:19:07.659926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=1723082 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # waitforlisten 1723082 /var/tmp/bdevperf.sock 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1723082 ']' 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:53.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:53.696 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:53.955 [2024-11-20 08:19:07.735153] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:21:53.955 [2024-11-20 08:19:07.735193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723082 ] 00:21:53.955 [2024-11-20 08:19:07.808271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:53.955 [2024-11-20 08:19:07.848353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:53.955 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:53.955 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:53.955 08:19:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.5dhRnXSWg5 00:21:54.214 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:21:54.473 [2024-11-20 08:19:08.315706] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:54.473 nvme0n1 00:21:54.473 08:19:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:54.473 Running I/O for 1 seconds... 
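bdevperf reports each run's throughput both as IOPS and as MiB/s; with the fixed 4096-byte I/O size used throughout this test (`-o 4k`, `"io_size": 4096`), the two figures are related by a constant conversion. A minimal sketch, checked against the 5293.36 IOPS / 20.68 MiB/s figures from the run above:

```python
# Convert a bdevperf IOPS figure to MiB/s for a fixed I/O size.
# 1 MiB = 1048576 bytes; io_size_bytes matches the -o 4k argument.
def iops_to_mibps(iops: float, io_size_bytes: int = 4096) -> float:
    return iops * io_size_bytes / (1024 * 1024)

print(round(iops_to_mibps(5293.36), 2))  # -> 20.68, as in the log above
```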
00:21:55.849 5456.00 IOPS, 21.31 MiB/s 00:21:55.849 Latency(us) 00:21:55.849 [2024-11-20T07:19:09.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:55.849 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:55.849 Verification LBA range: start 0x0 length 0x2000 00:21:55.849 nvme0n1 : 1.01 5505.11 21.50 0.00 0.00 23088.05 5960.66 31582.11 00:21:55.849 [2024-11-20T07:19:09.877Z] =================================================================================================================== 00:21:55.849 [2024-11-20T07:19:09.877Z] Total : 5505.11 21.50 0.00 0.00 23088.05 5960.66 31582.11 00:21:55.849 { 00:21:55.849 "results": [ 00:21:55.849 { 00:21:55.849 "job": "nvme0n1", 00:21:55.849 "core_mask": "0x2", 00:21:55.849 "workload": "verify", 00:21:55.849 "status": "finished", 00:21:55.849 "verify_range": { 00:21:55.849 "start": 0, 00:21:55.849 "length": 8192 00:21:55.849 }, 00:21:55.849 "queue_depth": 128, 00:21:55.849 "io_size": 4096, 00:21:55.849 "runtime": 1.014331, 00:21:55.849 "iops": 5505.106321309317, 00:21:55.849 "mibps": 21.504321567614518, 00:21:55.849 "io_failed": 0, 00:21:55.849 "io_timeout": 0, 00:21:55.849 "avg_latency_us": 23088.049338245328, 00:21:55.849 "min_latency_us": 5960.655238095238, 00:21:55.849 "max_latency_us": 31582.110476190475 00:21:55.849 } 00:21:55.849 ], 00:21:55.849 "core_count": 1 00:21:55.849 } 00:21:55.849 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:21:55.849 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.849 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:55.849 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.849 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:21:55.849 "subsystems": [ 00:21:55.849 { 00:21:55.849 "subsystem": 
"keyring", 00:21:55.849 "config": [ 00:21:55.849 { 00:21:55.849 "method": "keyring_file_add_key", 00:21:55.849 "params": { 00:21:55.849 "name": "key0", 00:21:55.849 "path": "/tmp/tmp.5dhRnXSWg5" 00:21:55.849 } 00:21:55.849 } 00:21:55.849 ] 00:21:55.849 }, 00:21:55.849 { 00:21:55.849 "subsystem": "iobuf", 00:21:55.849 "config": [ 00:21:55.849 { 00:21:55.849 "method": "iobuf_set_options", 00:21:55.849 "params": { 00:21:55.849 "small_pool_count": 8192, 00:21:55.849 "large_pool_count": 1024, 00:21:55.849 "small_bufsize": 8192, 00:21:55.849 "large_bufsize": 135168, 00:21:55.849 "enable_numa": false 00:21:55.849 } 00:21:55.849 } 00:21:55.849 ] 00:21:55.849 }, 00:21:55.849 { 00:21:55.849 "subsystem": "sock", 00:21:55.849 "config": [ 00:21:55.849 { 00:21:55.849 "method": "sock_set_default_impl", 00:21:55.849 "params": { 00:21:55.849 "impl_name": "posix" 00:21:55.849 } 00:21:55.849 }, 00:21:55.849 { 00:21:55.849 "method": "sock_impl_set_options", 00:21:55.849 "params": { 00:21:55.849 "impl_name": "ssl", 00:21:55.849 "recv_buf_size": 4096, 00:21:55.849 "send_buf_size": 4096, 00:21:55.850 "enable_recv_pipe": true, 00:21:55.850 "enable_quickack": false, 00:21:55.850 "enable_placement_id": 0, 00:21:55.850 "enable_zerocopy_send_server": true, 00:21:55.850 "enable_zerocopy_send_client": false, 00:21:55.850 "zerocopy_threshold": 0, 00:21:55.850 "tls_version": 0, 00:21:55.850 "enable_ktls": false 00:21:55.850 } 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "method": "sock_impl_set_options", 00:21:55.850 "params": { 00:21:55.850 "impl_name": "posix", 00:21:55.850 "recv_buf_size": 2097152, 00:21:55.850 "send_buf_size": 2097152, 00:21:55.850 "enable_recv_pipe": true, 00:21:55.850 "enable_quickack": false, 00:21:55.850 "enable_placement_id": 0, 00:21:55.850 "enable_zerocopy_send_server": true, 00:21:55.850 "enable_zerocopy_send_client": false, 00:21:55.850 "zerocopy_threshold": 0, 00:21:55.850 "tls_version": 0, 00:21:55.850 "enable_ktls": false 00:21:55.850 } 00:21:55.850 } 00:21:55.850 
] 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "subsystem": "vmd", 00:21:55.850 "config": [] 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "subsystem": "accel", 00:21:55.850 "config": [ 00:21:55.850 { 00:21:55.850 "method": "accel_set_options", 00:21:55.850 "params": { 00:21:55.850 "small_cache_size": 128, 00:21:55.850 "large_cache_size": 16, 00:21:55.850 "task_count": 2048, 00:21:55.850 "sequence_count": 2048, 00:21:55.850 "buf_count": 2048 00:21:55.850 } 00:21:55.850 } 00:21:55.850 ] 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "subsystem": "bdev", 00:21:55.850 "config": [ 00:21:55.850 { 00:21:55.850 "method": "bdev_set_options", 00:21:55.850 "params": { 00:21:55.850 "bdev_io_pool_size": 65535, 00:21:55.850 "bdev_io_cache_size": 256, 00:21:55.850 "bdev_auto_examine": true, 00:21:55.850 "iobuf_small_cache_size": 128, 00:21:55.850 "iobuf_large_cache_size": 16 00:21:55.850 } 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "method": "bdev_raid_set_options", 00:21:55.850 "params": { 00:21:55.850 "process_window_size_kb": 1024, 00:21:55.850 "process_max_bandwidth_mb_sec": 0 00:21:55.850 } 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "method": "bdev_iscsi_set_options", 00:21:55.850 "params": { 00:21:55.850 "timeout_sec": 30 00:21:55.850 } 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "method": "bdev_nvme_set_options", 00:21:55.850 "params": { 00:21:55.850 "action_on_timeout": "none", 00:21:55.850 "timeout_us": 0, 00:21:55.850 "timeout_admin_us": 0, 00:21:55.850 "keep_alive_timeout_ms": 10000, 00:21:55.850 "arbitration_burst": 0, 00:21:55.850 "low_priority_weight": 0, 00:21:55.850 "medium_priority_weight": 0, 00:21:55.850 "high_priority_weight": 0, 00:21:55.850 "nvme_adminq_poll_period_us": 10000, 00:21:55.850 "nvme_ioq_poll_period_us": 0, 00:21:55.850 "io_queue_requests": 0, 00:21:55.850 "delay_cmd_submit": true, 00:21:55.850 "transport_retry_count": 4, 00:21:55.850 "bdev_retry_count": 3, 00:21:55.850 "transport_ack_timeout": 0, 00:21:55.850 "ctrlr_loss_timeout_sec": 0, 
00:21:55.850 "reconnect_delay_sec": 0, 00:21:55.850 "fast_io_fail_timeout_sec": 0, 00:21:55.850 "disable_auto_failback": false, 00:21:55.850 "generate_uuids": false, 00:21:55.850 "transport_tos": 0, 00:21:55.850 "nvme_error_stat": false, 00:21:55.850 "rdma_srq_size": 0, 00:21:55.850 "io_path_stat": false, 00:21:55.850 "allow_accel_sequence": false, 00:21:55.850 "rdma_max_cq_size": 0, 00:21:55.850 "rdma_cm_event_timeout_ms": 0, 00:21:55.850 "dhchap_digests": [ 00:21:55.850 "sha256", 00:21:55.850 "sha384", 00:21:55.850 "sha512" 00:21:55.850 ], 00:21:55.850 "dhchap_dhgroups": [ 00:21:55.850 "null", 00:21:55.850 "ffdhe2048", 00:21:55.850 "ffdhe3072", 00:21:55.850 "ffdhe4096", 00:21:55.850 "ffdhe6144", 00:21:55.850 "ffdhe8192" 00:21:55.850 ] 00:21:55.850 } 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "method": "bdev_nvme_set_hotplug", 00:21:55.850 "params": { 00:21:55.850 "period_us": 100000, 00:21:55.850 "enable": false 00:21:55.850 } 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "method": "bdev_malloc_create", 00:21:55.850 "params": { 00:21:55.850 "name": "malloc0", 00:21:55.850 "num_blocks": 8192, 00:21:55.850 "block_size": 4096, 00:21:55.850 "physical_block_size": 4096, 00:21:55.850 "uuid": "68f4abef-38a4-4b7e-8e89-3eacbe2e1fe4", 00:21:55.850 "optimal_io_boundary": 0, 00:21:55.850 "md_size": 0, 00:21:55.850 "dif_type": 0, 00:21:55.850 "dif_is_head_of_md": false, 00:21:55.850 "dif_pi_format": 0 00:21:55.850 } 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "method": "bdev_wait_for_examine" 00:21:55.850 } 00:21:55.850 ] 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "subsystem": "nbd", 00:21:55.850 "config": [] 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "subsystem": "scheduler", 00:21:55.850 "config": [ 00:21:55.850 { 00:21:55.850 "method": "framework_set_scheduler", 00:21:55.850 "params": { 00:21:55.850 "name": "static" 00:21:55.850 } 00:21:55.850 } 00:21:55.850 ] 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "subsystem": "nvmf", 00:21:55.850 "config": [ 00:21:55.850 { 
00:21:55.850 "method": "nvmf_set_config", 00:21:55.850 "params": { 00:21:55.850 "discovery_filter": "match_any", 00:21:55.850 "admin_cmd_passthru": { 00:21:55.850 "identify_ctrlr": false 00:21:55.850 }, 00:21:55.850 "dhchap_digests": [ 00:21:55.850 "sha256", 00:21:55.850 "sha384", 00:21:55.850 "sha512" 00:21:55.850 ], 00:21:55.850 "dhchap_dhgroups": [ 00:21:55.850 "null", 00:21:55.850 "ffdhe2048", 00:21:55.850 "ffdhe3072", 00:21:55.850 "ffdhe4096", 00:21:55.850 "ffdhe6144", 00:21:55.850 "ffdhe8192" 00:21:55.850 ] 00:21:55.850 } 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "method": "nvmf_set_max_subsystems", 00:21:55.850 "params": { 00:21:55.850 "max_subsystems": 1024 00:21:55.850 } 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "method": "nvmf_set_crdt", 00:21:55.850 "params": { 00:21:55.850 "crdt1": 0, 00:21:55.850 "crdt2": 0, 00:21:55.850 "crdt3": 0 00:21:55.850 } 00:21:55.850 }, 00:21:55.850 { 00:21:55.850 "method": "nvmf_create_transport", 00:21:55.850 "params": { 00:21:55.850 "trtype": "TCP", 00:21:55.850 "max_queue_depth": 128, 00:21:55.850 "max_io_qpairs_per_ctrlr": 127, 00:21:55.850 "in_capsule_data_size": 4096, 00:21:55.851 "max_io_size": 131072, 00:21:55.851 "io_unit_size": 131072, 00:21:55.851 "max_aq_depth": 128, 00:21:55.851 "num_shared_buffers": 511, 00:21:55.851 "buf_cache_size": 4294967295, 00:21:55.851 "dif_insert_or_strip": false, 00:21:55.851 "zcopy": false, 00:21:55.851 "c2h_success": false, 00:21:55.851 "sock_priority": 0, 00:21:55.851 "abort_timeout_sec": 1, 00:21:55.851 "ack_timeout": 0, 00:21:55.851 "data_wr_pool_size": 0 00:21:55.851 } 00:21:55.851 }, 00:21:55.851 { 00:21:55.851 "method": "nvmf_create_subsystem", 00:21:55.851 "params": { 00:21:55.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.851 "allow_any_host": false, 00:21:55.851 "serial_number": "00000000000000000000", 00:21:55.851 "model_number": "SPDK bdev Controller", 00:21:55.851 "max_namespaces": 32, 00:21:55.851 "min_cntlid": 1, 00:21:55.851 "max_cntlid": 65519, 00:21:55.851 
"ana_reporting": false 00:21:55.851 } 00:21:55.851 }, 00:21:55.851 { 00:21:55.851 "method": "nvmf_subsystem_add_host", 00:21:55.851 "params": { 00:21:55.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.851 "host": "nqn.2016-06.io.spdk:host1", 00:21:55.851 "psk": "key0" 00:21:55.851 } 00:21:55.851 }, 00:21:55.851 { 00:21:55.851 "method": "nvmf_subsystem_add_ns", 00:21:55.851 "params": { 00:21:55.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.851 "namespace": { 00:21:55.851 "nsid": 1, 00:21:55.851 "bdev_name": "malloc0", 00:21:55.851 "nguid": "68F4ABEF38A44B7E8E893EACBE2E1FE4", 00:21:55.851 "uuid": "68f4abef-38a4-4b7e-8e89-3eacbe2e1fe4", 00:21:55.851 "no_auto_visible": false 00:21:55.851 } 00:21:55.851 } 00:21:55.851 }, 00:21:55.851 { 00:21:55.851 "method": "nvmf_subsystem_add_listener", 00:21:55.851 "params": { 00:21:55.851 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:55.851 "listen_address": { 00:21:55.851 "trtype": "TCP", 00:21:55.851 "adrfam": "IPv4", 00:21:55.851 "traddr": "10.0.0.2", 00:21:55.851 "trsvcid": "4420" 00:21:55.851 }, 00:21:55.851 "secure_channel": false, 00:21:55.851 "sock_impl": "ssl" 00:21:55.851 } 00:21:55.851 } 00:21:55.851 ] 00:21:55.851 } 00:21:55.851 ] 00:21:55.851 }' 00:21:55.851 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:21:56.110 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:21:56.110 "subsystems": [ 00:21:56.110 { 00:21:56.110 "subsystem": "keyring", 00:21:56.110 "config": [ 00:21:56.110 { 00:21:56.110 "method": "keyring_file_add_key", 00:21:56.110 "params": { 00:21:56.110 "name": "key0", 00:21:56.110 "path": "/tmp/tmp.5dhRnXSWg5" 00:21:56.110 } 00:21:56.110 } 00:21:56.110 ] 00:21:56.110 }, 00:21:56.110 { 00:21:56.110 "subsystem": "iobuf", 00:21:56.110 "config": [ 00:21:56.110 { 00:21:56.110 "method": "iobuf_set_options", 00:21:56.110 "params": { 00:21:56.110 
"small_pool_count": 8192, 00:21:56.110 "large_pool_count": 1024, 00:21:56.110 "small_bufsize": 8192, 00:21:56.110 "large_bufsize": 135168, 00:21:56.110 "enable_numa": false 00:21:56.110 } 00:21:56.110 } 00:21:56.110 ] 00:21:56.110 }, 00:21:56.110 { 00:21:56.110 "subsystem": "sock", 00:21:56.110 "config": [ 00:21:56.110 { 00:21:56.110 "method": "sock_set_default_impl", 00:21:56.110 "params": { 00:21:56.110 "impl_name": "posix" 00:21:56.110 } 00:21:56.110 }, 00:21:56.110 { 00:21:56.110 "method": "sock_impl_set_options", 00:21:56.110 "params": { 00:21:56.110 "impl_name": "ssl", 00:21:56.110 "recv_buf_size": 4096, 00:21:56.110 "send_buf_size": 4096, 00:21:56.110 "enable_recv_pipe": true, 00:21:56.110 "enable_quickack": false, 00:21:56.110 "enable_placement_id": 0, 00:21:56.110 "enable_zerocopy_send_server": true, 00:21:56.110 "enable_zerocopy_send_client": false, 00:21:56.110 "zerocopy_threshold": 0, 00:21:56.110 "tls_version": 0, 00:21:56.110 "enable_ktls": false 00:21:56.110 } 00:21:56.110 }, 00:21:56.110 { 00:21:56.110 "method": "sock_impl_set_options", 00:21:56.111 "params": { 00:21:56.111 "impl_name": "posix", 00:21:56.111 "recv_buf_size": 2097152, 00:21:56.111 "send_buf_size": 2097152, 00:21:56.111 "enable_recv_pipe": true, 00:21:56.111 "enable_quickack": false, 00:21:56.111 "enable_placement_id": 0, 00:21:56.111 "enable_zerocopy_send_server": true, 00:21:56.111 "enable_zerocopy_send_client": false, 00:21:56.111 "zerocopy_threshold": 0, 00:21:56.111 "tls_version": 0, 00:21:56.111 "enable_ktls": false 00:21:56.111 } 00:21:56.111 } 00:21:56.111 ] 00:21:56.111 }, 00:21:56.111 { 00:21:56.111 "subsystem": "vmd", 00:21:56.111 "config": [] 00:21:56.111 }, 00:21:56.111 { 00:21:56.111 "subsystem": "accel", 00:21:56.111 "config": [ 00:21:56.111 { 00:21:56.111 "method": "accel_set_options", 00:21:56.111 "params": { 00:21:56.111 "small_cache_size": 128, 00:21:56.111 "large_cache_size": 16, 00:21:56.111 "task_count": 2048, 00:21:56.111 "sequence_count": 2048, 00:21:56.111 
"buf_count": 2048 00:21:56.111 } 00:21:56.111 } 00:21:56.111 ] 00:21:56.111 }, 00:21:56.111 { 00:21:56.111 "subsystem": "bdev", 00:21:56.111 "config": [ 00:21:56.111 { 00:21:56.111 "method": "bdev_set_options", 00:21:56.111 "params": { 00:21:56.111 "bdev_io_pool_size": 65535, 00:21:56.111 "bdev_io_cache_size": 256, 00:21:56.111 "bdev_auto_examine": true, 00:21:56.111 "iobuf_small_cache_size": 128, 00:21:56.111 "iobuf_large_cache_size": 16 00:21:56.111 } 00:21:56.111 }, 00:21:56.111 { 00:21:56.111 "method": "bdev_raid_set_options", 00:21:56.111 "params": { 00:21:56.111 "process_window_size_kb": 1024, 00:21:56.111 "process_max_bandwidth_mb_sec": 0 00:21:56.111 } 00:21:56.111 }, 00:21:56.111 { 00:21:56.111 "method": "bdev_iscsi_set_options", 00:21:56.111 "params": { 00:21:56.111 "timeout_sec": 30 00:21:56.111 } 00:21:56.111 }, 00:21:56.111 { 00:21:56.111 "method": "bdev_nvme_set_options", 00:21:56.111 "params": { 00:21:56.111 "action_on_timeout": "none", 00:21:56.111 "timeout_us": 0, 00:21:56.111 "timeout_admin_us": 0, 00:21:56.111 "keep_alive_timeout_ms": 10000, 00:21:56.111 "arbitration_burst": 0, 00:21:56.111 "low_priority_weight": 0, 00:21:56.111 "medium_priority_weight": 0, 00:21:56.111 "high_priority_weight": 0, 00:21:56.111 "nvme_adminq_poll_period_us": 10000, 00:21:56.111 "nvme_ioq_poll_period_us": 0, 00:21:56.111 "io_queue_requests": 512, 00:21:56.111 "delay_cmd_submit": true, 00:21:56.111 "transport_retry_count": 4, 00:21:56.111 "bdev_retry_count": 3, 00:21:56.111 "transport_ack_timeout": 0, 00:21:56.111 "ctrlr_loss_timeout_sec": 0, 00:21:56.111 "reconnect_delay_sec": 0, 00:21:56.111 "fast_io_fail_timeout_sec": 0, 00:21:56.111 "disable_auto_failback": false, 00:21:56.111 "generate_uuids": false, 00:21:56.111 "transport_tos": 0, 00:21:56.111 "nvme_error_stat": false, 00:21:56.111 "rdma_srq_size": 0, 00:21:56.111 "io_path_stat": false, 00:21:56.111 "allow_accel_sequence": false, 00:21:56.111 "rdma_max_cq_size": 0, 00:21:56.111 "rdma_cm_event_timeout_ms": 0, 
00:21:56.111 "dhchap_digests": [ 00:21:56.111 "sha256", 00:21:56.111 "sha384", 00:21:56.111 "sha512" 00:21:56.111 ], 00:21:56.111 "dhchap_dhgroups": [ 00:21:56.111 "null", 00:21:56.111 "ffdhe2048", 00:21:56.111 "ffdhe3072", 00:21:56.111 "ffdhe4096", 00:21:56.111 "ffdhe6144", 00:21:56.111 "ffdhe8192" 00:21:56.111 ] 00:21:56.111 } 00:21:56.111 }, 00:21:56.111 { 00:21:56.111 "method": "bdev_nvme_attach_controller", 00:21:56.111 "params": { 00:21:56.111 "name": "nvme0", 00:21:56.111 "trtype": "TCP", 00:21:56.111 "adrfam": "IPv4", 00:21:56.111 "traddr": "10.0.0.2", 00:21:56.111 "trsvcid": "4420", 00:21:56.111 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.111 "prchk_reftag": false, 00:21:56.111 "prchk_guard": false, 00:21:56.111 "ctrlr_loss_timeout_sec": 0, 00:21:56.111 "reconnect_delay_sec": 0, 00:21:56.111 "fast_io_fail_timeout_sec": 0, 00:21:56.111 "psk": "key0", 00:21:56.111 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:56.111 "hdgst": false, 00:21:56.111 "ddgst": false, 00:21:56.111 "multipath": "multipath" 00:21:56.111 } 00:21:56.111 }, 00:21:56.111 { 00:21:56.111 "method": "bdev_nvme_set_hotplug", 00:21:56.111 "params": { 00:21:56.111 "period_us": 100000, 00:21:56.111 "enable": false 00:21:56.111 } 00:21:56.111 }, 00:21:56.111 { 00:21:56.111 "method": "bdev_enable_histogram", 00:21:56.111 "params": { 00:21:56.111 "name": "nvme0n1", 00:21:56.111 "enable": true 00:21:56.111 } 00:21:56.111 }, 00:21:56.111 { 00:21:56.111 "method": "bdev_wait_for_examine" 00:21:56.111 } 00:21:56.111 ] 00:21:56.111 }, 00:21:56.111 { 00:21:56.111 "subsystem": "nbd", 00:21:56.111 "config": [] 00:21:56.111 } 00:21:56.111 ] 00:21:56.111 }' 00:21:56.111 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 1723082 00:21:56.111 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1723082 ']' 00:21:56.111 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1723082 00:21:56.111 08:19:09 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:56.111 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.111 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1723082 00:21:56.111 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:56.111 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:56.111 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1723082' 00:21:56.111 killing process with pid 1723082 00:21:56.111 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1723082 00:21:56.111 Received shutdown signal, test time was about 1.000000 seconds 00:21:56.111 00:21:56.111 Latency(us) 00:21:56.111 [2024-11-20T07:19:10.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:56.111 [2024-11-20T07:19:10.139Z] =================================================================================================================== 00:21:56.111 [2024-11-20T07:19:10.139Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:56.111 08:19:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1723082 00:21:56.111 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 1722993 00:21:56.111 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1722993 ']' 00:21:56.111 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1722993 00:21:56.111 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:56.111 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:56.111 
08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1722993 00:21:56.371 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:56.371 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:56.371 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1722993' 00:21:56.371 killing process with pid 1722993 00:21:56.371 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1722993 00:21:56.371 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1722993 00:21:56.371 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:21:56.371 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:56.371 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:56.371 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:21:56.371 "subsystems": [ 00:21:56.371 { 00:21:56.371 "subsystem": "keyring", 00:21:56.371 "config": [ 00:21:56.371 { 00:21:56.371 "method": "keyring_file_add_key", 00:21:56.371 "params": { 00:21:56.371 "name": "key0", 00:21:56.371 "path": "/tmp/tmp.5dhRnXSWg5" 00:21:56.371 } 00:21:56.371 } 00:21:56.371 ] 00:21:56.371 }, 00:21:56.371 { 00:21:56.371 "subsystem": "iobuf", 00:21:56.371 "config": [ 00:21:56.371 { 00:21:56.371 "method": "iobuf_set_options", 00:21:56.371 "params": { 00:21:56.371 "small_pool_count": 8192, 00:21:56.371 "large_pool_count": 1024, 00:21:56.371 "small_bufsize": 8192, 00:21:56.371 "large_bufsize": 135168, 00:21:56.371 "enable_numa": false 00:21:56.371 } 00:21:56.371 } 00:21:56.371 ] 00:21:56.371 }, 00:21:56.371 { 00:21:56.371 "subsystem": "sock", 00:21:56.371 "config": [ 
00:21:56.371 { 00:21:56.371 "method": "sock_set_default_impl", 00:21:56.371 "params": { 00:21:56.371 "impl_name": "posix" 00:21:56.371 } 00:21:56.371 }, 00:21:56.371 { 00:21:56.371 "method": "sock_impl_set_options", 00:21:56.371 "params": { 00:21:56.371 "impl_name": "ssl", 00:21:56.371 "recv_buf_size": 4096, 00:21:56.371 "send_buf_size": 4096, 00:21:56.371 "enable_recv_pipe": true, 00:21:56.371 "enable_quickack": false, 00:21:56.371 "enable_placement_id": 0, 00:21:56.371 "enable_zerocopy_send_server": true, 00:21:56.371 "enable_zerocopy_send_client": false, 00:21:56.371 "zerocopy_threshold": 0, 00:21:56.371 "tls_version": 0, 00:21:56.371 "enable_ktls": false 00:21:56.371 } 00:21:56.371 }, 00:21:56.371 { 00:21:56.371 "method": "sock_impl_set_options", 00:21:56.371 "params": { 00:21:56.371 "impl_name": "posix", 00:21:56.371 "recv_buf_size": 2097152, 00:21:56.371 "send_buf_size": 2097152, 00:21:56.371 "enable_recv_pipe": true, 00:21:56.371 "enable_quickack": false, 00:21:56.371 "enable_placement_id": 0, 00:21:56.371 "enable_zerocopy_send_server": true, 00:21:56.371 "enable_zerocopy_send_client": false, 00:21:56.371 "zerocopy_threshold": 0, 00:21:56.371 "tls_version": 0, 00:21:56.371 "enable_ktls": false 00:21:56.371 } 00:21:56.371 } 00:21:56.371 ] 00:21:56.371 }, 00:21:56.371 { 00:21:56.371 "subsystem": "vmd", 00:21:56.371 "config": [] 00:21:56.371 }, 00:21:56.371 { 00:21:56.371 "subsystem": "accel", 00:21:56.371 "config": [ 00:21:56.371 { 00:21:56.371 "method": "accel_set_options", 00:21:56.371 "params": { 00:21:56.371 "small_cache_size": 128, 00:21:56.371 "large_cache_size": 16, 00:21:56.371 "task_count": 2048, 00:21:56.371 "sequence_count": 2048, 00:21:56.371 "buf_count": 2048 00:21:56.371 } 00:21:56.371 } 00:21:56.371 ] 00:21:56.371 }, 00:21:56.371 { 00:21:56.371 "subsystem": "bdev", 00:21:56.371 "config": [ 00:21:56.371 { 00:21:56.371 "method": "bdev_set_options", 00:21:56.371 "params": { 00:21:56.371 "bdev_io_pool_size": 65535, 00:21:56.371 "bdev_io_cache_size": 
256, 00:21:56.371 "bdev_auto_examine": true, 00:21:56.371 "iobuf_small_cache_size": 128, 00:21:56.371 "iobuf_large_cache_size": 16 00:21:56.371 } 00:21:56.371 }, 00:21:56.371 { 00:21:56.371 "method": "bdev_raid_set_options", 00:21:56.371 "params": { 00:21:56.371 "process_window_size_kb": 1024, 00:21:56.371 "process_max_bandwidth_mb_sec": 0 00:21:56.371 } 00:21:56.371 }, 00:21:56.371 { 00:21:56.371 "method": "bdev_iscsi_set_options", 00:21:56.371 "params": { 00:21:56.371 "timeout_sec": 30 00:21:56.371 } 00:21:56.371 }, 00:21:56.371 { 00:21:56.371 "method": "bdev_nvme_set_options", 00:21:56.371 "params": { 00:21:56.371 "action_on_timeout": "none", 00:21:56.371 "timeout_us": 0, 00:21:56.371 "timeout_admin_us": 0, 00:21:56.371 "keep_alive_timeout_ms": 10000, 00:21:56.371 "arbitration_burst": 0, 00:21:56.371 "low_priority_weight": 0, 00:21:56.371 "medium_priority_weight": 0, 00:21:56.371 "high_priority_weight": 0, 00:21:56.371 "nvme_adminq_poll_period_us": 10000, 00:21:56.371 "nvme_ioq_poll_period_us": 0, 00:21:56.371 "io_queue_requests": 0, 00:21:56.371 "delay_cmd_submit": true, 00:21:56.371 "transport_retry_count": 4, 00:21:56.371 "bdev_retry_count": 3, 00:21:56.372 "transport_ack_timeout": 0, 00:21:56.372 "ctrlr_loss_timeout_sec": 0, 00:21:56.372 "reconnect_delay_sec": 0, 00:21:56.372 "fast_io_fail_timeout_sec": 0, 00:21:56.372 "disable_auto_failback": false, 00:21:56.372 "generate_uuids": false, 00:21:56.372 "transport_tos": 0, 00:21:56.372 "nvme_error_stat": false, 00:21:56.372 "rdma_srq_size": 0, 00:21:56.372 "io_path_stat": false, 00:21:56.372 "allow_accel_sequence": false, 00:21:56.372 "rdma_max_cq_size": 0, 00:21:56.372 "rdma_cm_event_timeout_ms": 0, 00:21:56.372 "dhchap_digests": [ 00:21:56.372 "sha256", 00:21:56.372 "sha384", 00:21:56.372 "sha512" 00:21:56.372 ], 00:21:56.372 "dhchap_dhgroups": [ 00:21:56.372 "null", 00:21:56.372 "ffdhe2048", 00:21:56.372 "ffdhe3072", 00:21:56.372 "ffdhe4096", 00:21:56.372 "ffdhe6144", 00:21:56.372 "ffdhe8192" 00:21:56.372 ] 
00:21:56.372 } 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "method": "bdev_nvme_set_hotplug", 00:21:56.372 "params": { 00:21:56.372 "period_us": 100000, 00:21:56.372 "enable": false 00:21:56.372 } 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "method": "bdev_malloc_create", 00:21:56.372 "params": { 00:21:56.372 "name": "malloc0", 00:21:56.372 "num_blocks": 8192, 00:21:56.372 "block_size": 4096, 00:21:56.372 "physical_block_size": 4096, 00:21:56.372 "uuid": "68f4abef-38a4-4b7e-8e89-3eacbe2e1fe4", 00:21:56.372 "optimal_io_boundary": 0, 00:21:56.372 "md_size": 0, 00:21:56.372 "dif_type": 0, 00:21:56.372 "dif_is_head_of_md": false, 00:21:56.372 "dif_pi_format": 0 00:21:56.372 } 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "method": "bdev_wait_for_examine" 00:21:56.372 } 00:21:56.372 ] 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "subsystem": "nbd", 00:21:56.372 "config": [] 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "subsystem": "scheduler", 00:21:56.372 "config": [ 00:21:56.372 { 00:21:56.372 "method": "framework_set_scheduler", 00:21:56.372 "params": { 00:21:56.372 "name": "static" 00:21:56.372 } 00:21:56.372 } 00:21:56.372 ] 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "subsystem": "nvmf", 00:21:56.372 "config": [ 00:21:56.372 { 00:21:56.372 "method": "nvmf_set_config", 00:21:56.372 "params": { 00:21:56.372 "discovery_filter": "match_any", 00:21:56.372 "admin_cmd_passthru": { 00:21:56.372 "identify_ctrlr": false 00:21:56.372 }, 00:21:56.372 "dhchap_digests": [ 00:21:56.372 "sha256", 00:21:56.372 "sha384", 00:21:56.372 "sha512" 00:21:56.372 ], 00:21:56.372 "dhchap_dhgroups": [ 00:21:56.372 "null", 00:21:56.372 "ffdhe2048", 00:21:56.372 "ffdhe3072", 00:21:56.372 "ffdhe4096", 00:21:56.372 "ffdhe6144", 00:21:56.372 "ffdhe8192" 00:21:56.372 ] 00:21:56.372 } 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "method": "nvmf_set_max_subsystems", 00:21:56.372 "params": { 00:21:56.372 "max_subsystems": 1024 00:21:56.372 } 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "method": 
"nvmf_set_crdt", 00:21:56.372 "params": { 00:21:56.372 "crdt1": 0, 00:21:56.372 "crdt2": 0, 00:21:56.372 "crdt3": 0 00:21:56.372 } 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "method": "nvmf_create_transport", 00:21:56.372 "params": { 00:21:56.372 "trtype": "TCP", 00:21:56.372 "max_queue_depth": 128, 00:21:56.372 "max_io_qpairs_per_ctrlr": 127, 00:21:56.372 "in_capsule_data_size": 4096, 00:21:56.372 "max_io_size": 131072, 00:21:56.372 "io_unit_size": 131072, 00:21:56.372 "max_aq_depth": 128, 00:21:56.372 "num_shared_buffers": 511, 00:21:56.372 "buf_cache_size": 4294967295, 00:21:56.372 "dif_insert_or_strip": false, 00:21:56.372 "zcopy": false, 00:21:56.372 "c2h_success": false, 00:21:56.372 "sock_priority": 0, 00:21:56.372 "abort_timeout_sec": 1, 00:21:56.372 "ack_timeout": 0, 00:21:56.372 "data_wr_pool_size": 0 00:21:56.372 } 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "method": "nvmf_create_subsystem", 00:21:56.372 "params": { 00:21:56.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.372 "allow_any_host": false, 00:21:56.372 "serial_number": "00000000000000000000", 00:21:56.372 "model_number": "SPDK bdev Controller", 00:21:56.372 "max_namespaces": 32, 00:21:56.372 "min_cntlid": 1, 00:21:56.372 "max_cntlid": 65519, 00:21:56.372 "ana_reporting": false 00:21:56.372 } 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "method": "nvmf_subsystem_add_host", 00:21:56.372 "params": { 00:21:56.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.372 "host": "nqn.2016-06.io.spdk:host1", 00:21:56.372 "psk": "key0" 00:21:56.372 } 00:21:56.372 }, 00:21:56.372 { 00:21:56.372 "method": "nvmf_subsystem_add_ns", 00:21:56.372 "params": { 00:21:56.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.372 "namespace": { 00:21:56.372 "nsid": 1, 00:21:56.372 "bdev_name": "malloc0", 00:21:56.372 "nguid": "68F4ABEF38A44B7E8E893EACBE2E1FE4", 00:21:56.372 "uuid": "68f4abef-38a4-4b7e-8e89-3eacbe2e1fe4", 00:21:56.372 "no_auto_visible": false 00:21:56.372 } 00:21:56.372 } 00:21:56.372 }, 00:21:56.372 { 
00:21:56.372 "method": "nvmf_subsystem_add_listener", 00:21:56.372 "params": { 00:21:56.372 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:56.372 "listen_address": { 00:21:56.372 "trtype": "TCP", 00:21:56.372 "adrfam": "IPv4", 00:21:56.372 "traddr": "10.0.0.2", 00:21:56.372 "trsvcid": "4420" 00:21:56.372 }, 00:21:56.372 "secure_channel": false, 00:21:56.372 "sock_impl": "ssl" 00:21:56.372 } 00:21:56.372 } 00:21:56.372 ] 00:21:56.372 } 00:21:56.372 ] 00:21:56.372 }' 00:21:56.372 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.372 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=1723557 00:21:56.372 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:21:56.372 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 1723557 00:21:56.372 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1723557 ']' 00:21:56.372 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.372 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:56.372 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.372 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:56.372 08:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:56.631 [2024-11-20 08:19:10.395410] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:21:56.631 [2024-11-20 08:19:10.395454] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.631 [2024-11-20 08:19:10.474136] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:56.631 [2024-11-20 08:19:10.514273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.631 [2024-11-20 08:19:10.514309] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.631 [2024-11-20 08:19:10.514315] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.631 [2024-11-20 08:19:10.514324] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.631 [2024-11-20 08:19:10.514330] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:56.631 [2024-11-20 08:19:10.514894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:56.890 [2024-11-20 08:19:10.725876] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.890 [2024-11-20 08:19:10.757895] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:56.890 [2024-11-20 08:19:10.758088] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=1723585 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 1723585 /var/tmp/bdevperf.sock 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 1723585 ']' 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:57.458 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:21:57.458 "subsystems": [ 00:21:57.458 { 00:21:57.458 "subsystem": "keyring", 00:21:57.458 "config": [ 00:21:57.458 { 00:21:57.458 "method": "keyring_file_add_key", 00:21:57.458 "params": { 00:21:57.458 "name": "key0", 00:21:57.458 "path": "/tmp/tmp.5dhRnXSWg5" 00:21:57.458 } 00:21:57.458 } 00:21:57.458 ] 00:21:57.458 }, 00:21:57.458 { 00:21:57.458 "subsystem": "iobuf", 00:21:57.458 "config": [ 00:21:57.458 { 00:21:57.458 "method": "iobuf_set_options", 00:21:57.458 "params": { 00:21:57.458 "small_pool_count": 8192, 00:21:57.458 "large_pool_count": 1024, 00:21:57.458 "small_bufsize": 8192, 00:21:57.458 "large_bufsize": 135168, 00:21:57.458 "enable_numa": false 00:21:57.458 } 00:21:57.458 } 00:21:57.458 ] 00:21:57.458 }, 00:21:57.458 { 00:21:57.458 "subsystem": "sock", 00:21:57.458 "config": [ 00:21:57.458 { 00:21:57.458 "method": "sock_set_default_impl", 00:21:57.458 "params": { 00:21:57.458 "impl_name": "posix" 00:21:57.458 } 00:21:57.458 }, 00:21:57.458 { 00:21:57.458 "method": "sock_impl_set_options", 00:21:57.458 "params": { 00:21:57.458 "impl_name": "ssl", 00:21:57.458 "recv_buf_size": 4096, 00:21:57.458 "send_buf_size": 4096, 00:21:57.458 "enable_recv_pipe": true, 00:21:57.458 "enable_quickack": false, 00:21:57.458 "enable_placement_id": 0, 00:21:57.458 "enable_zerocopy_send_server": true, 00:21:57.458 "enable_zerocopy_send_client": false, 00:21:57.458 "zerocopy_threshold": 0, 00:21:57.458 "tls_version": 0, 00:21:57.458 "enable_ktls": false 00:21:57.459 } 00:21:57.459 }, 00:21:57.459 { 00:21:57.459 "method": "sock_impl_set_options", 00:21:57.459 "params": { 00:21:57.459 "impl_name": "posix", 00:21:57.459 "recv_buf_size": 2097152, 00:21:57.459 "send_buf_size": 
2097152, 00:21:57.459 "enable_recv_pipe": true, 00:21:57.459 "enable_quickack": false, 00:21:57.459 "enable_placement_id": 0, 00:21:57.459 "enable_zerocopy_send_server": true, 00:21:57.459 "enable_zerocopy_send_client": false, 00:21:57.459 "zerocopy_threshold": 0, 00:21:57.459 "tls_version": 0, 00:21:57.459 "enable_ktls": false 00:21:57.459 } 00:21:57.459 } 00:21:57.459 ] 00:21:57.459 }, 00:21:57.459 { 00:21:57.459 "subsystem": "vmd", 00:21:57.459 "config": [] 00:21:57.459 }, 00:21:57.459 { 00:21:57.459 "subsystem": "accel", 00:21:57.459 "config": [ 00:21:57.459 { 00:21:57.459 "method": "accel_set_options", 00:21:57.459 "params": { 00:21:57.459 "small_cache_size": 128, 00:21:57.459 "large_cache_size": 16, 00:21:57.459 "task_count": 2048, 00:21:57.459 "sequence_count": 2048, 00:21:57.459 "buf_count": 2048 00:21:57.459 } 00:21:57.459 } 00:21:57.459 ] 00:21:57.459 }, 00:21:57.459 { 00:21:57.459 "subsystem": "bdev", 00:21:57.459 "config": [ 00:21:57.459 { 00:21:57.459 "method": "bdev_set_options", 00:21:57.459 "params": { 00:21:57.459 "bdev_io_pool_size": 65535, 00:21:57.459 "bdev_io_cache_size": 256, 00:21:57.459 "bdev_auto_examine": true, 00:21:57.459 "iobuf_small_cache_size": 128, 00:21:57.459 "iobuf_large_cache_size": 16 00:21:57.459 } 00:21:57.459 }, 00:21:57.459 { 00:21:57.459 "method": "bdev_raid_set_options", 00:21:57.459 "params": { 00:21:57.459 "process_window_size_kb": 1024, 00:21:57.459 "process_max_bandwidth_mb_sec": 0 00:21:57.459 } 00:21:57.459 }, 00:21:57.459 { 00:21:57.459 "method": "bdev_iscsi_set_options", 00:21:57.459 "params": { 00:21:57.459 "timeout_sec": 30 00:21:57.459 } 00:21:57.459 }, 00:21:57.459 { 00:21:57.459 "method": "bdev_nvme_set_options", 00:21:57.459 "params": { 00:21:57.459 "action_on_timeout": "none", 00:21:57.459 "timeout_us": 0, 00:21:57.459 "timeout_admin_us": 0, 00:21:57.459 "keep_alive_timeout_ms": 10000, 00:21:57.459 "arbitration_burst": 0, 00:21:57.459 "low_priority_weight": 0, 00:21:57.459 "medium_priority_weight": 0, 
00:21:57.459 "high_priority_weight": 0, 00:21:57.459 "nvme_adminq_poll_period_us": 10000, 00:21:57.459 "nvme_ioq_poll_period_us": 0, 00:21:57.459 "io_queue_requests": 512, 00:21:57.459 "delay_cmd_submit": true, 00:21:57.459 "transport_retry_count": 4, 00:21:57.459 "bdev_retry_count": 3, 00:21:57.459 "transport_ack_timeout": 0, 00:21:57.459 "ctrlr_loss_timeout_sec": 0, 00:21:57.459 "reconnect_delay_sec": 0, 00:21:57.459 "fast_io_fail_timeout_sec": 0, 00:21:57.459 "disable_auto_failback": false, 00:21:57.459 "generate_uuids": false, 00:21:57.459 "transport_tos": 0, 00:21:57.459 "nvme_error_stat": false, 00:21:57.459 "rdma_srq_size": 0, 00:21:57.459 "io_path_stat": false, 00:21:57.459 "allow_accel_sequence": false, 00:21:57.459 "rdma_max_cq_size": 0, 00:21:57.459 "rdma_cm_event_timeout_ms": 0Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:57.459 , 00:21:57.459 "dhchap_digests": [ 00:21:57.459 "sha256", 00:21:57.459 "sha384", 00:21:57.459 "sha512" 00:21:57.459 ], 00:21:57.459 "dhchap_dhgroups": [ 00:21:57.459 "null", 00:21:57.459 "ffdhe2048", 00:21:57.459 "ffdhe3072", 00:21:57.459 "ffdhe4096", 00:21:57.459 "ffdhe6144", 00:21:57.459 "ffdhe8192" 00:21:57.459 ] 00:21:57.459 } 00:21:57.459 }, 00:21:57.459 { 00:21:57.459 "method": "bdev_nvme_attach_controller", 00:21:57.459 "params": { 00:21:57.459 "name": "nvme0", 00:21:57.459 "trtype": "TCP", 00:21:57.459 "adrfam": "IPv4", 00:21:57.459 "traddr": "10.0.0.2", 00:21:57.459 "trsvcid": "4420", 00:21:57.459 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.459 "prchk_reftag": false, 00:21:57.459 "prchk_guard": false, 00:21:57.459 "ctrlr_loss_timeout_sec": 0, 00:21:57.459 "reconnect_delay_sec": 0, 00:21:57.459 "fast_io_fail_timeout_sec": 0, 00:21:57.459 "psk": "key0", 00:21:57.459 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:57.459 "hdgst": false, 00:21:57.459 "ddgst": false, 00:21:57.459 "multipath": "multipath" 00:21:57.459 } 00:21:57.459 }, 00:21:57.459 { 00:21:57.459 "method": 
"bdev_nvme_set_hotplug", 00:21:57.459 "params": { 00:21:57.459 "period_us": 100000, 00:21:57.459 "enable": false 00:21:57.459 } 00:21:57.459 }, 00:21:57.459 { 00:21:57.459 "method": "bdev_enable_histogram", 00:21:57.459 "params": { 00:21:57.459 "name": "nvme0n1", 00:21:57.459 "enable": true 00:21:57.459 } 00:21:57.459 }, 00:21:57.459 { 00:21:57.459 "method": "bdev_wait_for_examine" 00:21:57.459 } 00:21:57.459 ] 00:21:57.459 }, 00:21:57.459 { 00:21:57.459 "subsystem": "nbd", 00:21:57.459 "config": [] 00:21:57.459 } 00:21:57.459 ] 00:21:57.459 }' 00:21:57.459 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:57.459 08:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:21:57.459 [2024-11-20 08:19:11.303024] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:21:57.459 [2024-11-20 08:19:11.303076] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1723585 ] 00:21:57.459 [2024-11-20 08:19:11.379076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:57.459 [2024-11-20 08:19:11.423176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:57.718 [2024-11-20 08:19:11.575939] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:58.286 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:58.286 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:21:58.286 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:58.286 08:19:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:21:58.545 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.545 08:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:58.545 Running I/O for 1 seconds... 00:21:59.482 5514.00 IOPS, 21.54 MiB/s 00:21:59.482 Latency(us) 00:21:59.482 [2024-11-20T07:19:13.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.482 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:21:59.482 Verification LBA range: start 0x0 length 0x2000 00:21:59.482 nvme0n1 : 1.01 5564.96 21.74 0.00 0.00 22854.84 4868.39 21221.18 00:21:59.482 [2024-11-20T07:19:13.510Z] =================================================================================================================== 00:21:59.482 [2024-11-20T07:19:13.510Z] Total : 5564.96 21.74 0.00 0.00 22854.84 4868.39 21221.18 00:21:59.482 { 00:21:59.482 "results": [ 00:21:59.482 { 00:21:59.482 "job": "nvme0n1", 00:21:59.482 "core_mask": "0x2", 00:21:59.482 "workload": "verify", 00:21:59.482 "status": "finished", 00:21:59.482 "verify_range": { 00:21:59.482 "start": 0, 00:21:59.482 "length": 8192 00:21:59.482 }, 00:21:59.482 "queue_depth": 128, 00:21:59.482 "io_size": 4096, 00:21:59.482 "runtime": 1.013843, 00:21:59.482 "iops": 5564.964200571489, 00:21:59.482 "mibps": 21.738141408482377, 00:21:59.482 "io_failed": 0, 00:21:59.482 "io_timeout": 0, 00:21:59.482 "avg_latency_us": 22854.843390219612, 00:21:59.482 "min_latency_us": 4868.388571428572, 00:21:59.482 "max_latency_us": 21221.180952380953 00:21:59.482 } 00:21:59.482 ], 00:21:59.482 "core_count": 1 00:21:59.482 } 00:21:59.482 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT SIGTERM EXIT 00:21:59.482 08:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:21:59.482 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:21:59.482 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:21:59.482 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:21:59.482 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:21:59.482 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:21:59.482 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:21:59.482 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:21:59.482 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:21:59.482 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:21:59.482 nvmf_trace.0 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 1723585 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1723585 ']' 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1723585 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 1723585 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1723585' 00:21:59.742 killing process with pid 1723585 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1723585 00:21:59.742 Received shutdown signal, test time was about 1.000000 seconds 00:21:59.742 00:21:59.742 Latency(us) 00:21:59.742 [2024-11-20T07:19:13.770Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.742 [2024-11-20T07:19:13.770Z] =================================================================================================================== 00:21:59.742 [2024-11-20T07:19:13.770Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1723585 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:59.742 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:00.002 rmmod nvme_tcp 00:22:00.002 rmmod nvme_fabrics 00:22:00.002 rmmod nvme_keyring 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set -e 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # '[' -n 1723557 ']' 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # killprocess 1723557 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 1723557 ']' 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 1723557 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1723557 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1723557' 00:22:00.002 killing process with pid 1723557 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 1723557 00:22:00.002 08:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 1723557 00:22:00.261 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:00.261 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # nvmf_fini 00:22:00.261 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@254 -- # local dev 00:22:00.261 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@257 -- # 
remove_target_ns 00:22:00.261 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:00.261 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:00.261 08:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@258 -- # delete_main_bridge 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@121 -- # return 0 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@269 -- # flush_ip 
cvl_0_1 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@274 -- # iptr 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-save 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:22:02.321 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@548 -- # iptables-restore 00:22:02.322 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.G7KZOvugoV /tmp/tmp.QjJGuqDt0f /tmp/tmp.5dhRnXSWg5 00:22:02.322 00:22:02.322 real 1m20.977s 00:22:02.322 user 2m4.066s 00:22:02.322 sys 0m30.097s 00:22:02.322 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:02.322 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:02.322 ************************************ 00:22:02.322 END TEST nvmf_tls 00:22:02.322 ************************************ 00:22:02.322 08:19:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:02.322 08:19:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 
3 -le 1 ']' 00:22:02.322 08:19:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:02.322 08:19:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:02.322 ************************************ 00:22:02.322 START TEST nvmf_fips 00:22:02.322 ************************************ 00:22:02.322 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:22:02.322 * Looking for test storage... 00:22:02.322 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:22:02.322 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:02.322 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:22:02.322 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:22:02.582 08:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.582 08:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:02.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.582 --rc genhtml_branch_coverage=1 00:22:02.582 --rc genhtml_function_coverage=1 00:22:02.582 --rc genhtml_legend=1 00:22:02.582 --rc geninfo_all_blocks=1 00:22:02.582 --rc geninfo_unexecuted_blocks=1 00:22:02.582 00:22:02.582 ' 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:02.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.582 --rc genhtml_branch_coverage=1 00:22:02.582 --rc genhtml_function_coverage=1 00:22:02.582 --rc genhtml_legend=1 00:22:02.582 --rc geninfo_all_blocks=1 00:22:02.582 --rc geninfo_unexecuted_blocks=1 00:22:02.582 00:22:02.582 ' 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:02.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.582 --rc genhtml_branch_coverage=1 00:22:02.582 --rc genhtml_function_coverage=1 00:22:02.582 --rc genhtml_legend=1 00:22:02.582 --rc geninfo_all_blocks=1 00:22:02.582 --rc geninfo_unexecuted_blocks=1 00:22:02.582 00:22:02.582 ' 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:02.582 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:02.582 --rc genhtml_branch_coverage=1 00:22:02.582 --rc genhtml_function_coverage=1 00:22:02.582 --rc genhtml_legend=1 00:22:02.582 --rc geninfo_all_blocks=1 00:22:02.582 --rc geninfo_unexecuted_blocks=1 00:22:02.582 00:22:02.582 ' 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.582 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:22:02.583 08:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:02.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:02.583 08:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:22:02.583 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:22:02.583 Error setting digest 00:22:02.583 406217F7957F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:22:02.583 406217F7957F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # remove_target_ns 00:22:02.584 08:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # xtrace_disable 00:22:02.584 08:19:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # pci_devs=() 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # net_devs=() 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # e810=() 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # local -ga e810 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@137 -- # x722=() 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@137 -- # local -ga x722 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # mlx=() 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # local -ga mlx 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 
00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:09.153 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:09.153 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:09.154 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:09.154 Found net devices under 0000:86:00.0: cvl_0_0 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 
00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:09.154 Found net devices under 0000:86:00.1: cvl_0_1 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # is_hw=yes 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@247 -- # create_target_ns 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:09.154 08:19:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:09.154 10.0.0.1 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:09.154 
08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:09.154 10.0.0.2 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:09.154 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # [[ 
tcp == tcp ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 
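The `val_to_ip` calls and `ips=("$ip" $((++ip)))` at setup.sh@11-13 and setup.sh@48 above derive the initiator/target address pair from the integer pool 0x0a000001 (167772161). A self-contained sketch of the same arithmetic (the shift-based octet split is one way to implement the mapping; the actual setup.sh internals may differ):

```shell
# Convert a 32-bit integer to dotted-quad, mirroring val_to_ip's
# printf '%u.%u.%u.%u' output seen in the log
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24)) $(((val >> 16) & 0xff)) $(((val >> 8) & 0xff)) $((val & 0xff))
}

ip=$((0x0a000001))        # ip_pool from setup_interfaces
ips=("$ip" $((++ip)))     # consecutive pair: initiator, then target

val_to_ip "${ips[0]}"     # 10.0.0.1 -> assigned to cvl_0_0
val_to_ip "${ips[1]}"     # 10.0.0.2 -> assigned to cvl_0_1 inside nvmf_ns_spdk
```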
00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:09.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:09.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:22:09.155 00:22:09.155 --- 10.0.0.1 ping statistics --- 00:22:09.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.155 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 
00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:22:09.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:09.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:22:09.155 00:22:09.155 --- 10.0.0.2 ping statistics --- 00:22:09.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:09.155 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair++ )) 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # return 0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:09.155 
08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=initiator1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # return 1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev= 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@160 -- # return 0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target0 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:09.155 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # get_net_dev target1 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # local dev=target1 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n target1 
]] 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # return 1 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@159 -- # dev= 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@160 -- # return 0 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:22:09.156 ' 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # nvmfpid=1727639 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # 
waitforlisten 1727639 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1727639 ']' 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:09.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.156 08:19:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.156 [2024-11-20 08:19:22.724714] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:22:09.156 [2024-11-20 08:19:22.724762] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:09.156 [2024-11-20 08:19:22.805399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:09.156 [2024-11-20 08:19:22.843877] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:09.156 [2024-11-20 08:19:22.843912] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:09.156 [2024-11-20 08:19:22.843919] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:09.156 [2024-11-20 08:19:22.843924] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:09.156 [2024-11-20 08:19:22.843930] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:09.156 [2024-11-20 08:19:22.844473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.783 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.783 00:22:09.724 08:19:23 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.783 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.783 00:22:09.724 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:09.983 [2024-11-20 08:19:23.757226] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:09.983 [2024-11-20 08:19:23.773230] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:09.983 [2024-11-20 08:19:23.773423] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:09.983 malloc0 00:22:09.983 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:09.983 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=1727889 00:22:09.983 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:09.983 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 1727889 /var/tmp/bdevperf.sock 00:22:09.983 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 1727889 ']' 00:22:09.983 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:09.983 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:09.983 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:09.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
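The key written to /tmp/spdk-psk.783 above follows the NVMe/TCP PSK interchange format, `NVMeTLSkey-1:<hash>:<base64 payload>:` — where, per the NVMe TCP transport spec (TP 8018; stated here as background, not taken from this log), hash indicator `01` selects SHA-256 and the payload decodes to the 32-byte configured PSK followed by a 4-byte CRC32. A quick structural check on the key from this run:

```shell
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'

payload=${key#NVMeTLSkey-1:01:}   # strip the version/hash prefix
payload=${payload%:}              # strip the trailing ':'

# 32-byte PSK + 4-byte CRC32 = 36 decoded bytes
printf '%s' "$payload" | base64 -d | wc -c   # 36
```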
00:22:09.983 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:09.983 08:19:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:09.983 [2024-11-20 08:19:23.901706] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:22:09.983 [2024-11-20 08:19:23.901754] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1727889 ] 00:22:09.983 [2024-11-20 08:19:23.977737] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:10.243 [2024-11-20 08:19:24.020672] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:10.810 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:10.810 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:22:10.810 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.783 00:22:11.069 08:19:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:11.069 [2024-11-20 08:19:25.083902] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:11.328 TLSTESTn1 00:22:11.328 08:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:11.328 Running I/O 
for 10 seconds... 00:22:13.642 5666.00 IOPS, 22.13 MiB/s [2024-11-20T07:19:28.606Z] 5744.50 IOPS, 22.44 MiB/s [2024-11-20T07:19:29.542Z] 5748.67 IOPS, 22.46 MiB/s [2024-11-20T07:19:30.479Z] 5774.00 IOPS, 22.55 MiB/s [2024-11-20T07:19:31.414Z] 5753.00 IOPS, 22.47 MiB/s [2024-11-20T07:19:32.349Z] 5754.33 IOPS, 22.48 MiB/s [2024-11-20T07:19:33.721Z] 5738.14 IOPS, 22.41 MiB/s [2024-11-20T07:19:34.287Z] 5727.25 IOPS, 22.37 MiB/s [2024-11-20T07:19:35.662Z] 5726.33 IOPS, 22.37 MiB/s [2024-11-20T07:19:35.662Z] 5719.10 IOPS, 22.34 MiB/s 00:22:21.634 Latency(us) 00:22:21.634 [2024-11-20T07:19:35.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.634 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:21.634 Verification LBA range: start 0x0 length 0x2000 00:22:21.634 TLSTESTn1 : 10.02 5722.17 22.35 0.00 0.00 22334.90 6772.05 35951.18 00:22:21.634 [2024-11-20T07:19:35.662Z] =================================================================================================================== 00:22:21.634 [2024-11-20T07:19:35.662Z] Total : 5722.17 22.35 0.00 0.00 22334.90 6772.05 35951.18 00:22:21.634 { 00:22:21.634 "results": [ 00:22:21.634 { 00:22:21.634 "job": "TLSTESTn1", 00:22:21.634 "core_mask": "0x4", 00:22:21.634 "workload": "verify", 00:22:21.634 "status": "finished", 00:22:21.634 "verify_range": { 00:22:21.634 "start": 0, 00:22:21.634 "length": 8192 00:22:21.634 }, 00:22:21.634 "queue_depth": 128, 00:22:21.634 "io_size": 4096, 00:22:21.634 "runtime": 10.016655, 00:22:21.634 "iops": 5722.1697263208125, 00:22:21.634 "mibps": 22.352225493440674, 00:22:21.634 "io_failed": 0, 00:22:21.634 "io_timeout": 0, 00:22:21.634 "avg_latency_us": 22334.89880761712, 00:22:21.634 "min_latency_us": 6772.053333333333, 00:22:21.634 "max_latency_us": 35951.177142857145 00:22:21.634 } 00:22:21.634 ], 00:22:21.634 "core_count": 1 00:22:21.634 } 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # 
cleanup 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:22:21.634 nvmf_trace.0 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 1727889 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1727889 ']' 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1727889 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1727889 00:22:21.634 08:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1727889' 00:22:21.634 killing process with pid 1727889 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1727889 00:22:21.634 Received shutdown signal, test time was about 10.000000 seconds 00:22:21.634 00:22:21.634 Latency(us) 00:22:21.634 [2024-11-20T07:19:35.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:21.634 [2024-11-20T07:19:35.662Z] =================================================================================================================== 00:22:21.634 [2024-11-20T07:19:35.662Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1727889 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:21.634 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:21.634 rmmod nvme_tcp 00:22:21.634 rmmod nvme_fabrics 00:22:21.634 rmmod nvme_keyring 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 
00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@106 -- # set -e 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # '[' -n 1727639 ']' 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # killprocess 1727639 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 1727639 ']' 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 1727639 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1727639 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1727639' 00:22:21.893 killing process with pid 1727639 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 1727639 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 1727639 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # nvmf_fini 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@254 -- # local dev 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@257 
-- # remove_target_ns 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:21.893 08:19:35 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@258 -- # delete_main_bridge 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@121 -- # return 0 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@274 -- # iptr 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-save 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@548 -- # iptables-restore 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.783 00:22:24.429 00:22:24.429 real 0m21.789s 00:22:24.429 user 0m23.945s 00:22:24.429 sys 0m9.229s 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:24.429 08:19:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:22:24.429 ************************************ 00:22:24.429 END TEST nvmf_fips 00:22:24.429 ************************************ 00:22:24.429 08:19:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:24.429 08:19:38 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:24.429 08:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:24.429 08:19:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:24.429 ************************************ 00:22:24.429 START TEST nvmf_control_msg_list 00:22:24.429 ************************************ 00:22:24.429 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:22:24.429 * Looking for test storage... 00:22:24.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:24.429 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:24.429 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:22:24.429 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:24.429 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:24.429 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:24.429 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:24.429 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:22:24.430 08:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:24.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.430 --rc genhtml_branch_coverage=1 00:22:24.430 --rc genhtml_function_coverage=1 00:22:24.430 --rc 
genhtml_legend=1 00:22:24.430 --rc geninfo_all_blocks=1 00:22:24.430 --rc geninfo_unexecuted_blocks=1 00:22:24.430 00:22:24.430 ' 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:24.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.430 --rc genhtml_branch_coverage=1 00:22:24.430 --rc genhtml_function_coverage=1 00:22:24.430 --rc genhtml_legend=1 00:22:24.430 --rc geninfo_all_blocks=1 00:22:24.430 --rc geninfo_unexecuted_blocks=1 00:22:24.430 00:22:24.430 ' 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:24.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.430 --rc genhtml_branch_coverage=1 00:22:24.430 --rc genhtml_function_coverage=1 00:22:24.430 --rc genhtml_legend=1 00:22:24.430 --rc geninfo_all_blocks=1 00:22:24.430 --rc geninfo_unexecuted_blocks=1 00:22:24.430 00:22:24.430 ' 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:24.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:24.430 --rc genhtml_branch_coverage=1 00:22:24.430 --rc genhtml_function_coverage=1 00:22:24.430 --rc genhtml_legend=1 00:22:24.430 --rc geninfo_all_blocks=1 00:22:24.430 --rc geninfo_unexecuted_blocks=1 00:22:24.430 00:22:24.430 ' 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:24.430 08:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:24.430 
08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:24.430 08:19:38 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:24.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:24.430 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:24.431 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:24.431 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:24.431 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # remove_target_ns 
00:22:24.431 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:24.431 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:24.431 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:24.431 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:24.431 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:24.431 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # xtrace_disable 00:22:24.431 08:19:38 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # pci_devs=() 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # net_devs=() 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:31.007 08:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # e810=() 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # local -ga e810 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # x722=() 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # local -ga x722 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # mlx=() 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # local -ga mlx 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:31.007 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:31.007 08:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:31.007 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 
00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:31.007 Found net devices under 0000:86:00.0: cvl_0_0 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:31.007 Found net devices under 0000:86:00.1: cvl_0_1 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:31.007 08:19:43 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # is_hw=yes 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@247 -- # create_target_ns 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:31.007 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip 
link set lo up 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@55 -- # 
initiator=cvl_0_0 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:31.008 08:19:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:31.008 08:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:31.008 10.0.0.1 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@200 -- # echo 10.0.0.2 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:31.008 10.0.0.2 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:22:31.008 08:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:31.008 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:31.008 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.360 ms 00:22:31.008 00:22:31.008 --- 10.0.0.1 ping statistics --- 00:22:31.008 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.008 rtt min/avg/max/mdev = 0.360/0.360/0.360/0.000 ms 00:22:31.008 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:31.009 08:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:22:31.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:31.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:22:31.009 00:22:31.009 --- 10.0.0.2 ping statistics --- 00:22:31.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:31.009 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair++ )) 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@270 -- # return 0 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:31.009 08:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 
10.0.0.1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=initiator1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # return 1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev= 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@160 -- # return 0 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target0 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # get_net_dev target1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # local dev=target1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # return 1 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@159 -- # dev= 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@160 -- # return 0 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:22:31.009 ' 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t 
tcp' 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # nvmfpid=1733446 00:22:31.009 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # waitforlisten 1733446 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 1733446 ']' 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:31.010 08:19:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:31.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:31.010 [2024-11-20 08:19:44.414744] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:22:31.010 [2024-11-20 08:19:44.414792] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:31.010 [2024-11-20 08:19:44.494987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:31.010 [2024-11-20 08:19:44.534915] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:31.010 [2024-11-20 08:19:44.534950] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:31.010 [2024-11-20 08:19:44.534957] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:31.010 [2024-11-20 08:19:44.534963] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:31.010 [2024-11-20 08:19:44.534969] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:31.010 [2024-11-20 08:19:44.535526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:31.010 [2024-11-20 08:19:44.673067] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:31.010 Malloc0 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:31.010 [2024-11-20 08:19:44.713162] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=1733519 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=1733520 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=1733521 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 1733519 00:22:31.010 08:19:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:31.010 [2024-11-20 08:19:44.801880] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:22:31.010 [2024-11-20 08:19:44.802082] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:31.010 [2024-11-20 08:19:44.802252] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:31.945 Initializing NVMe Controllers 00:22:31.946 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:31.946 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:22:31.946 Initialization complete. Launching workers. 00:22:31.946 ======================================================== 00:22:31.946 Latency(us) 00:22:31.946 Device Information : IOPS MiB/s Average min max 00:22:31.946 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 40897.69 40804.11 41036.92 00:22:31.946 ======================================================== 00:22:31.946 Total : 25.00 0.10 40897.69 40804.11 41036.92 00:22:31.946 00:22:32.205 Initializing NVMe Controllers 00:22:32.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:32.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:22:32.205 Initialization complete. Launching workers. 
00:22:32.205 ======================================================== 00:22:32.205 Latency(us) 00:22:32.205 Device Information : IOPS MiB/s Average min max 00:22:32.205 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 25.00 0.10 40912.97 40812.40 41291.10 00:22:32.205 ======================================================== 00:22:32.205 Total : 25.00 0.10 40912.97 40812.40 41291.10 00:22:32.205 00:22:32.205 Initializing NVMe Controllers 00:22:32.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:32.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:22:32.206 Initialization complete. Launching workers. 00:22:32.206 ======================================================== 00:22:32.206 Latency(us) 00:22:32.206 Device Information : IOPS MiB/s Average min max 00:22:32.206 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40912.91 40833.65 41256.63 00:22:32.206 ======================================================== 00:22:32.206 Total : 25.00 0.10 40912.91 40833.65 41256.63 00:22:32.206 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 1733520 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 1733521 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 -- # sync 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:32.206 08:19:46 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:32.206 rmmod nvme_tcp 00:22:32.206 rmmod nvme_fabrics 00:22:32.206 rmmod nvme_keyring 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # '[' -n 1733446 ']' 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # killprocess 1733446 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 1733446 ']' 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 1733446 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1733446 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 1733446' 00:22:32.206 killing process with pid 1733446 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 1733446 00:22:32.206 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 1733446 00:22:32.466 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:32.466 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # nvmf_fini 00:22:32.466 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@254 -- # local dev 00:22:32.466 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@257 -- # remove_target_ns 00:22:32.466 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:32.466 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:32.466 08:19:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@258 -- # delete_main_bridge 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@121 -- # return 0 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:22:34.373 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=() 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@274 -- # iptr 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@548 -- # iptables-save 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@548 -- # iptables-restore 00:22:34.374 00:22:34.374 real 0m10.303s 00:22:34.374 user 0m7.084s 00:22:34.374 sys 0m5.296s 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.374 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:22:34.374 ************************************ 00:22:34.374 END TEST nvmf_control_msg_list 00:22:34.374 ************************************ 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:34.633 ************************************ 00:22:34.633 START TEST nvmf_wait_for_buf 00:22:34.633 ************************************ 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:22:34.633 * Looking for test storage... 
00:22:34.633 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.633 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:22:34.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.634 --rc genhtml_branch_coverage=1 00:22:34.634 --rc genhtml_function_coverage=1 00:22:34.634 --rc genhtml_legend=1 00:22:34.634 --rc geninfo_all_blocks=1 00:22:34.634 --rc geninfo_unexecuted_blocks=1 00:22:34.634 00:22:34.634 ' 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:34.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.634 --rc genhtml_branch_coverage=1 00:22:34.634 --rc genhtml_function_coverage=1 00:22:34.634 --rc genhtml_legend=1 00:22:34.634 --rc geninfo_all_blocks=1 00:22:34.634 --rc geninfo_unexecuted_blocks=1 00:22:34.634 00:22:34.634 ' 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:34.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.634 --rc genhtml_branch_coverage=1 00:22:34.634 --rc genhtml_function_coverage=1 00:22:34.634 --rc genhtml_legend=1 00:22:34.634 --rc geninfo_all_blocks=1 00:22:34.634 --rc geninfo_unexecuted_blocks=1 00:22:34.634 00:22:34.634 ' 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:34.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.634 --rc genhtml_branch_coverage=1 00:22:34.634 --rc genhtml_function_coverage=1 00:22:34.634 --rc genhtml_legend=1 00:22:34.634 --rc geninfo_all_blocks=1 00:22:34.634 --rc geninfo_unexecuted_blocks=1 00:22:34.634 00:22:34.634 ' 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@50 -- # : 0 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:34.634 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # remove_target_ns 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # xtrace_disable 00:22:34.634 08:19:48 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # pci_devs=() 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # net_devs=() 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # e810=() 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # local -ga e810 00:22:41.208 
08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # x722=() 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # local -ga x722 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # mlx=() 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # local -ga mlx 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:41.208 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:41.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:41.208 08:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:41.208 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:41.209 Found net devices under 0000:86:00.0: cvl_0_0 00:22:41.209 08:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:41.209 Found net devices under 0000:86:00.1: cvl_0_1 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # is_hw=yes 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:41.209 08:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@247 -- # create_target_ns 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 
00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 
00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:41.209 10.0.0.1 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 
in_ns=NVMF_TARGET_NS_CMD 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:41.209 10.0.0.2 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:22:41.209 08:19:54 
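The `val_to_ip` calls traced above (nvmf/setup.sh@11-13) turn the integer pool values 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2 via a `printf '%u.%u.%u.%u\n'` over the four octets. The helper's body is not shown in the trace; the following is a minimal reconstruction consistent with the printf arguments the log records, not the actual SPDK source:

```shell
# Reconstructed sketch of val_to_ip from nvmf/setup.sh (hypothetical body;
# only its printf output appears in the trace): unpack a 32-bit integer
# into dotted-quad IPv4 notation, most significant octet first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (0x0a000001)
val_to_ip 167772162   # 10.0.0.2 (0x0a000002)
```

This matches the log above, where `set_ip cvl_0_0 167772161` resolves to `ip=10.0.0.1` and `set_ip cvl_0_1 167772162` to `ip=10.0.0.2`.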
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:22:41.209 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:41.209 08:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
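The loop bounds traced in `setup_interfaces` (nvmf/setup.sh@31-33, `(( _dev++, ip_pool += 2 ))`) show the address pool starting at 0x0a000001 and handing each initiator/target pair two consecutive addresses. A small self-contained sketch of that arithmetic, assuming the single-pair case (`no=1`) seen in this run:

```shell
# Sketch of the IP-pool bookkeeping from setup_interfaces above: the pool
# starts at 0x0a000001 (10.0.0.1) and each interface pair consumes two
# consecutive integer addresses (initiator, then target).
ip_pool=$((0x0a000001))
no=1       # number of pairs requested in this run
_dev=0
while (( _dev < no )); do
  printf 'pair %d: initiator=%u target=%u\n' "$_dev" "$ip_pool" $((ip_pool + 1))
  (( _dev++, ip_pool += 2 ))
done
```

With one pair this yields the 167772161/167772162 values that `val_to_ip` later renders as 10.0.0.1 and 10.0.0.2; the `(_dev + no) * 2 <= 255` guard in the original keeps the pool inside a single /24.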
nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:41.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:41.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.477 ms 00:22:41.210 00:22:41.210 --- 10.0.0.1 ping statistics --- 00:22:41.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.210 rtt min/avg/max/mdev = 0.477/0.477/0.477/0.000 ms 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:22:41.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:41.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:22:41.210 00:22:41.210 --- 10.0.0.2 ping statistics --- 00:22:41.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:41.210 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@270 -- # return 0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # 
get_initiator_ip_address initiator1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # return 1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev= 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@160 -- # return 0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ 
-n NVMF_TARGET_NS_CMD ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target0 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:41.210 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:41.211 08:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # local dev=target1 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # return 1 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@159 -- # dev= 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@160 -- # return 0 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:22:41.211 ' 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:41.211 08:19:54 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # nvmfpid=1737303 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # waitforlisten 1737303 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 1737303 ']' 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:41.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.211 [2024-11-20 08:19:54.793796] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:22:41.211 [2024-11-20 08:19:54.793848] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:41.211 [2024-11-20 08:19:54.871518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:41.211 [2024-11-20 08:19:54.913243] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:41.211 [2024-11-20 08:19:54.913281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:41.211 [2024-11-20 08:19:54.913289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:41.211 [2024-11-20 08:19:54.913295] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:41.211 [2024-11-20 08:19:54.913300] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:41.211 [2024-11-20 08:19:54.913856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 
00:22:41.211 08:19:54 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.211 Malloc0 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.211 [2024-11-20 08:19:55.086075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:41.211 [2024-11-20 08:19:55.114257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.211 08:19:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:41.211 [2024-11-20 08:19:55.193582] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:22:42.590 Initializing NVMe Controllers 00:22:42.590 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:22:42.590 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:22:42.590 Initialization complete. Launching workers. 00:22:42.590 ======================================================== 00:22:42.590 Latency(us) 00:22:42.590 Device Information : IOPS MiB/s Average min max 00:22:42.590 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 127.93 15.99 32366.20 7275.85 63848.88 00:22:42.590 ======================================================== 00:22:42.590 Total : 127.93 15.99 32366.20 7275.85 63848.88 00:22:42.590 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=2022 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 2022 -eq 0 ]] 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:42.849 rmmod nvme_tcp 00:22:42.849 rmmod nvme_fabrics 00:22:42.849 rmmod nvme_keyring 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # '[' -n 1737303 ']' 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # killprocess 1737303 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 1737303 ']' 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 1737303 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1737303 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1737303' 00:22:42.849 killing process with pid 1737303 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 1737303 00:22:42.849 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 1737303 00:22:43.109 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:43.109 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # nvmf_fini 00:22:43.109 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@254 -- # local dev 00:22:43.109 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@257 -- # remove_target_ns 00:22:43.109 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:43.109 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:43.109 08:19:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@121 -- # return 0 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 
-- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:22:45.014 08:19:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:22:45.014 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:22:45.014 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:22:45.014 
08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@274 -- # iptr 00:22:45.014 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-save 00:22:45.014 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:22:45.014 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@548 -- # iptables-restore 00:22:45.014 00:22:45.014 real 0m10.572s 00:22:45.014 user 0m4.041s 00:22:45.014 sys 0m4.969s 00:22:45.014 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:45.014 08:19:59 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:22:45.014 ************************************ 00:22:45.014 END TEST nvmf_wait_for_buf 00:22:45.014 ************************************ 00:22:45.275 08:19:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:22:45.275 08:19:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:22:45.275 08:19:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:22:45.275 08:19:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:22:45.275 08:19:59 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@125 -- # xtrace_disable 00:22:45.275 08:19:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # pci_devs=() 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:51.847 
08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # net_devs=() 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # e810=() 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # local -ga e810 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # x722=() 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # local -ga x722 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # mlx=() 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # local -ga mlx 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.847 08:20:04 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:51.847 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:51.847 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:51.847 08:20:04 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:51.847 Found net devices under 0000:86:00.0: cvl_0_0 00:22:51.847 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:51.848 Found net devices under 0000:86:00.1: cvl_0_1 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:51.848 ************************************ 00:22:51.848 START TEST nvmf_perf_adq 00:22:51.848 ************************************ 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:22:51.848 * Looking for test storage... 00:22:51.848 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 
00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:22:51.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.848 --rc genhtml_branch_coverage=1 00:22:51.848 --rc genhtml_function_coverage=1 00:22:51.848 --rc genhtml_legend=1 00:22:51.848 --rc geninfo_all_blocks=1 00:22:51.848 --rc geninfo_unexecuted_blocks=1 00:22:51.848 00:22:51.848 ' 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:51.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.848 --rc genhtml_branch_coverage=1 00:22:51.848 --rc genhtml_function_coverage=1 00:22:51.848 --rc genhtml_legend=1 00:22:51.848 --rc geninfo_all_blocks=1 00:22:51.848 --rc geninfo_unexecuted_blocks=1 00:22:51.848 00:22:51.848 ' 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:51.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.848 --rc genhtml_branch_coverage=1 00:22:51.848 --rc genhtml_function_coverage=1 00:22:51.848 --rc genhtml_legend=1 00:22:51.848 --rc geninfo_all_blocks=1 00:22:51.848 --rc geninfo_unexecuted_blocks=1 00:22:51.848 00:22:51.848 ' 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:51.848 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:51.848 --rc genhtml_branch_coverage=1 00:22:51.848 --rc genhtml_function_coverage=1 00:22:51.848 --rc genhtml_legend=1 00:22:51.848 --rc geninfo_all_blocks=1 00:22:51.848 --rc geninfo_unexecuted_blocks=1 00:22:51.848 00:22:51.848 ' 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:51.848 
08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:51.848 08:20:04 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.848 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@50 -- # : 0 
00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:51.849 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:22:51.849 08:20:04 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # 
local -a pci_net_devs 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:57.124 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.124 08:20:10 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:57.124 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 
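The discovery loop traced here globs `/sys/bus/pci/devices/$pci/net/*` into `pci_net_devs`, then strips everything up to the last `/` with the `"${pci_net_devs[@]##*/}"` expansion so only interface names remain. A self-contained sketch of that pattern (the temp-dir layout is a stand-in for sysfs so it runs without hardware):

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for the sysfs layout: a temp dir mimics
# /sys/bus/pci/devices/<addr>/net/<iface> so the glob pattern runs anywhere.
set -eu
demo=$(mktemp -d)
mkdir -p "$demo/0000:86:00.0/net/cvl_0_0"

pci="$demo/0000:86:00.0"
pci_net_devs=("$pci/net/"*)                 # glob: full paths to each iface entry
pci_net_devs=("${pci_net_devs[@]##*/}")     # strip dirnames: keep iface names only
echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
rm -rf "$demo"
```

The `##*/` expansion applied across the whole array is the same trick used per-element: remove the longest prefix matching `*/`, i.e. the directory portion.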
00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:57.124 Found net devices under 0000:86:00.0: cvl_0_0 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:57.124 Found net devices under 0000:86:00.1: cvl_0_1 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:22:57.124 08:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:22:57.693 08:20:11 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:00.226 08:20:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:23:05.502 08:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- 
# [[ e810 == e810 ]] 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:05.502 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.502 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:05.503 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:05.503 08:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:05.503 Found net devices under 0000:86:00.0: cvl_0_0 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.503 08:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:05.503 Found net devices under 0000:86:00.1: cvl_0_1 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@247 -- # create_target_ns 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:05.503 08:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:23:05.503 
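`val_to_ip` in the trace turns the integer from the IP pool (e.g. 167772161, i.e. 0x0A000001) into dotted-quad form; the log shows the final `printf '%u.%u.%u.%u\n' 10 0 0 1` call. A minimal sketch of how the octets can be unpacked (the shift-and-mask body is an assumption; only the printf call is visible in the log):

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip idea traced above: unpack a 32-bit integer into
# dotted-quad notation. Body is a guess; only printf '%u.%u.%u.%u' is logged.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
```

Driving addresses from a single integer pool is what lets the setup code hand out consecutive pairs (10.0.0.1 for the initiator, 10.0.0.2 for the target) with plain arithmetic (`ip_pool += 2`).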
08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:05.503 10.0.0.1 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:23:05.503 08:20:18 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:05.503 10.0.0.2 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.503 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:05.504 08:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@69 -- # [[ 
phy == veth ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:05.504 08:20:19 
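The `ipts` wrapper above replays its full argument list inside an `-m comment` tag, so teardown can later find and delete exactly the rules the test added. An echo-only stand-in for that tagging (the real helper in nvmf/common.sh invokes iptables, which needs root):

```shell
# Echo-only stand-in for the ipts() helper: append a comment that
# records the whole original rule under an SPDK_NVMF: prefix.
ipts_cmd() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts_cmd -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```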
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:05.504 PING 10.0.0.1 (10.0.0.1) 
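Note that get_ip_address above does not query `ip addr`: set_ip earlier mirrored each address into `/sys/class/net/<dev>/ifalias`, and the getter simply cats it back. A sketch of that round trip against a throwaway directory standing in for sysfs (the `fake_sys` path is illustrative):

```shell
# set_ip tees the address into the interface's ifalias file;
# get_ip_address later just cats it back. A temp dir stands in
# for /sys/class/net, which is not writable here.
fake_sys=$(mktemp -d)
mkdir -p "$fake_sys/cvl_0_0"
echo 10.0.0.1 | tee "$fake_sys/cvl_0_0/ifalias" > /dev/null
ip_addr=$(cat "$fake_sys/cvl_0_0/ifalias")
echo "$ip_addr"   # 10.0.0.1
```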
56(84) bytes of data. 00:23:05.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.473 ms 00:23:05.504 00:23:05.504 --- 10.0.0.1 ping statistics --- 00:23:05.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.504 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:05.504 08:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:23:05.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:23:05.504 00:23:05.504 --- 10.0.0.2 ping statistics --- 00:23:05.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.504 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:23:05.504 08:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator1 
00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@159 -- # get_net_dev target0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:23:05.504 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:05.505 08:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target1 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:23:05.505 ' 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:05.505 08:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=1745667 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 1745667 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1745667 ']' 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:05.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:05.505 08:20:19 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:05.505 [2024-11-20 08:20:19.264236] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:23:05.505 [2024-11-20 08:20:19.264285] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:05.505 [2024-11-20 08:20:19.344785] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:05.505 [2024-11-20 08:20:19.388541] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:05.505 [2024-11-20 08:20:19.388576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:05.505 [2024-11-20 08:20:19.388583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:05.505 [2024-11-20 08:20:19.388589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:05.505 [2024-11-20 08:20:19.388594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:05.505 [2024-11-20 08:20:19.389999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:05.505 [2024-11-20 08:20:19.390110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:05.505 [2024-11-20 08:20:19.390142] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:05.505 [2024-11-20 08:20:19.390143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:06.442 08:20:20 
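nvmf_tgt is launched above with `-m 0xF`, which is why DPDK reports four available cores and reactors come up on cores 0 through 3. A sketch decoding such a core mask (the loop body is an assumption; only the mask and the resulting reactor messages appear in the log):

```shell
# Expand a reactor core mask into the core IDs it selects.
mask=0xF
cores=
for c in $(seq 0 31); do
  if (( (mask >> c) & 1 )); then
    cores="${cores:+$cores }$c"
  fi
done
echo "cores: $cores"   # cores: 0 1 2 3
```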
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.442 [2024-11-20 08:20:20.281026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.442 Malloc1 00:23:06.442 08:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.442 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.443 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.443 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:06.443 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:06.443 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:06.443 [2024-11-20 08:20:20.342006] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:06.443 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:06.443 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=1745902 00:23:06.443 08:20:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:23:06.443 08:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:08.349 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:23:08.349 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:08.349 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:08.609 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:08.609 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:23:08.609 "tick_rate": 2100000000, 00:23:08.609 "poll_groups": [ 00:23:08.609 { 00:23:08.609 "name": "nvmf_tgt_poll_group_000", 00:23:08.609 "admin_qpairs": 1, 00:23:08.609 "io_qpairs": 1, 00:23:08.609 "current_admin_qpairs": 1, 00:23:08.609 "current_io_qpairs": 1, 00:23:08.609 "pending_bdev_io": 0, 00:23:08.609 "completed_nvme_io": 20348, 00:23:08.609 "transports": [ 00:23:08.609 { 00:23:08.609 "trtype": "TCP" 00:23:08.609 } 00:23:08.609 ] 00:23:08.609 }, 00:23:08.609 { 00:23:08.609 "name": "nvmf_tgt_poll_group_001", 00:23:08.609 "admin_qpairs": 0, 00:23:08.609 "io_qpairs": 1, 00:23:08.609 "current_admin_qpairs": 0, 00:23:08.609 "current_io_qpairs": 1, 00:23:08.609 "pending_bdev_io": 0, 00:23:08.609 "completed_nvme_io": 20603, 00:23:08.609 "transports": [ 00:23:08.609 { 00:23:08.609 "trtype": "TCP" 00:23:08.609 } 00:23:08.609 ] 00:23:08.609 }, 00:23:08.609 { 00:23:08.609 "name": "nvmf_tgt_poll_group_002", 00:23:08.609 "admin_qpairs": 0, 00:23:08.609 "io_qpairs": 1, 00:23:08.609 "current_admin_qpairs": 0, 00:23:08.609 "current_io_qpairs": 1, 00:23:08.609 "pending_bdev_io": 0, 00:23:08.609 "completed_nvme_io": 20290, 00:23:08.609 
"transports": [ 00:23:08.609 { 00:23:08.609 "trtype": "TCP" 00:23:08.609 } 00:23:08.609 ] 00:23:08.609 }, 00:23:08.609 { 00:23:08.609 "name": "nvmf_tgt_poll_group_003", 00:23:08.609 "admin_qpairs": 0, 00:23:08.609 "io_qpairs": 1, 00:23:08.609 "current_admin_qpairs": 0, 00:23:08.609 "current_io_qpairs": 1, 00:23:08.609 "pending_bdev_io": 0, 00:23:08.609 "completed_nvme_io": 20344, 00:23:08.609 "transports": [ 00:23:08.609 { 00:23:08.609 "trtype": "TCP" 00:23:08.609 } 00:23:08.609 ] 00:23:08.609 } 00:23:08.609 ] 00:23:08.609 }' 00:23:08.609 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:23:08.609 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:23:08.609 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:23:08.609 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:23:08.609 08:20:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 1745902 00:23:16.731 Initializing NVMe Controllers 00:23:16.732 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:16.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:16.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:16.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:16.732 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:16.732 Initialization complete. Launching workers. 
00:23:16.732 ======================================================== 00:23:16.732 Latency(us) 00:23:16.732 Device Information : IOPS MiB/s Average min max 00:23:16.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10728.40 41.91 5965.44 1490.54 9965.70 00:23:16.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10904.20 42.59 5870.06 2439.29 10564.53 00:23:16.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10743.40 41.97 5956.47 2269.13 12671.60 00:23:16.732 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10789.40 42.15 5931.60 2113.28 10322.52 00:23:16.732 ======================================================== 00:23:16.732 Total : 43165.40 168.61 5930.65 1490.54 12671.60 00:23:16.732 00:23:16.732 [2024-11-20 08:20:30.498589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbff520 is same with the state(6) to be set 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:16.732 rmmod nvme_tcp 00:23:16.732 rmmod nvme_fabrics 00:23:16.732 rmmod nvme_keyring 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # 
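As a sanity check on the spdk_nvme_perf summary above, the four per-core IOPS figures should add up to the Total row's 43165.40:

```shell
# Sum the per-core IOPS from the latency table and compare with
# the Total row printed by spdk_nvme_perf.
total=$(printf '%s\n' 10728.40 10904.20 10743.40 10789.40 |
  awk '{ s += $1 } END { printf "%.2f", s }')
echo "$total"   # 43165.40
```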
set -e 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 1745667 ']' 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 1745667 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1745667 ']' 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1745667 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1745667 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1745667' 00:23:16.732 killing process with pid 1745667 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1745667 00:23:16.732 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1745667 00:23:16.991 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:16.991 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:23:16.991 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@254 -- # local dev 00:23:16.991 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # 
remove_target_ns 00:23:16.991 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:16.991 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:16.991 08:20:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # delete_main_bridge 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # return 0 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 
00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@274 -- # iptr 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-save 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-restore 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:23:18.896 08:20:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:23:20.274 08:20:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:23:22.174 08:20:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
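The `iptr` step traced above can discard only the test's own firewall rules because every rule added earlier (via the `ipts` wrapper) carries an `SPDK_NVMF` comment; teardown then pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`. A minimal sketch of the filtering half, run on plain text instead of a live firewall so it needs no root (the rule strings are illustrative, not taken from this host):

```shell
#!/bin/sh
# Simulated iptables-save output: one SPDK-tagged rule, one unrelated rule.
saved='-A INPUT -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment SPDK_NVMF:accept-4420
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'

# Dropping the tagged rules is a single grep, exactly the shape of
# iptables-save | grep -v SPDK_NVMF | iptables-restore seen in the trace.
printf '%s\n' "$saved" | grep -v SPDK_NVMF
```

Only the untagged ssh rule survives the filter, which is why the cleanup cannot disturb pre-existing firewall state.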
nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:27.449 
08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:27.449 
08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:27.449 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.449 08:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:27.449 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:27.449 08:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:27.449 Found net devices under 0000:86:00.0: cvl_0_0 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:27.449 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:27.450 Found net devices under 0000:86:00.1: cvl_0_1 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == 
yes ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@247 -- # create_target_ns 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:27.450 08:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ tcp == 
tcp ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:27.450 10.0.0.1 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:27.450 10.0.0.2 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:23:27.450 08:20:41 
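The `val_to_ip` calls traced above turn the 32-bit `ip_pool` counter values (167772161, 167772162) into the dotted-quad addresses handed to `ip addr add`. A plausible reconstruction of that helper, hedged: only the final `printf '%u.%u.%u.%u\n' 10 0 0 1` appears in the trace, so the shift/mask octet math here is assumed:

```shell
#!/bin/sh
# Assumed reconstruction of nvmf/setup.sh's val_to_ip: split a 32-bit
# value into four octets, most significant first.
val_to_ip() {
  val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) $((  val        & 255 ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator side, matches the trace)
val_to_ip 167772162   # 10.0.0.2 (target side, matches the trace)
```

167772161 is 0x0A000001, so consecutive pool values yield consecutive addresses in 10.0.0.0/24, which is why the pair loop can simply do `ips=("$ip" $((++ip)))`.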
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 
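Every `eval` in the `set_up`/`set_ip` traces follows one dispatch pattern: when an `in_ns` variable names a namespace, the command is prefixed with `ip netns exec <ns>`; otherwise it runs in the default namespace. A sketch of that pattern which composes and prints the command instead of eval'ing it, so it runs without root (the function name `in_ns_cmd` is illustrative, not from the script):

```shell
#!/bin/sh
# Illustrative version of the namespace dispatch seen in set_up/set_ip:
# prefix with "ip netns exec <ns>" when a namespace is given, else run as-is.
# This variant echoes the composed command rather than eval'ing it.
in_ns_cmd() {
  ns=$1; shift
  if [ -n "$ns" ]; then
    echo "ip netns exec $ns $*"
  else
    echo "$*"
  fi
}

in_ns_cmd nvmf_ns_spdk ip link set cvl_0_1 up   # target side, inside the namespace
in_ns_cmd ''           ip link set cvl_0_0 up   # initiator side, default namespace
```

This mirrors why the trace shows `ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up` for the target interface but a bare `ip link set cvl_0_0 up` for the initiator one.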
-- # (( _dev < max + no )) 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:27.450 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.1 ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:27.451 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:27.451 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:23:27.451 00:23:27.451 --- 10.0.0.1 ping statistics --- 00:23:27.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.451 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:27.451 08:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:23:27.451 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:27.451 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.080 ms 00:23:27.451 00:23:27.451 --- 10.0.0.2 ping statistics --- 00:23:27.451 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:27.451 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:27.451 08:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:23:27.451 08:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@160 -- # return 0 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:27.451 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target0 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:27.452 08:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # local dev=target1 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # return 1 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@159 -- # dev= 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/setup.sh@160 -- # return 0 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:23:27.452 ' 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:27.452 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:27.711 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:23:27.711 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec nvmf_ns_spdk ethtool --offload cvl_0_1 hw-tc-offload on 00:23:27.711 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec nvmf_ns_spdk ethtool --set-priv-flags cvl_0_1 channel-pkt-inspect-optimize off 00:23:27.711 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:23:27.711 net.core.busy_poll = 1 00:23:27.711 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:23:27.711 net.core.busy_read = 1 00:23:27.711 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:23:27.711 08:20:41 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:23:27.711 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 ingress 00:23:27.711 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc filter add dev cvl_0_1 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:23:27.711 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_1 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=1749709 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 1749709 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 1749709 ']' 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:27.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:27.971 [2024-11-20 08:20:41.808168] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:23:27.971 [2024-11-20 08:20:41.808237] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:27.971 [2024-11-20 08:20:41.886361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:27.971 [2024-11-20 08:20:41.928797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:27.971 [2024-11-20 08:20:41.928832] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:27.971 [2024-11-20 08:20:41.928840] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:27.971 [2024-11-20 08:20:41.928846] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:27.971 [2024-11-20 08:20:41.928851] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:27.971 [2024-11-20 08:20:41.930434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:27.971 [2024-11-20 08:20:41.930544] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:27.971 [2024-11-20 08:20:41.930632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:27.971 [2024-11-20 08:20:41.930633] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:23:27.971 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:23:28.229 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:23:28.229 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.229 08:20:41 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:23:28.229 08:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.229 [2024-11-20 08:20:42.127687] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.229 Malloc1 00:23:28.229 08:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:28.229 [2024-11-20 08:20:42.188738] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=1749759 00:23:28.229 08:20:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:23:28.229 08:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:23:30.759 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:23:30.759 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:30.759 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:30.759 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:30.759 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:23:30.759 "tick_rate": 2100000000, 00:23:30.759 "poll_groups": [ 00:23:30.759 { 00:23:30.759 "name": "nvmf_tgt_poll_group_000", 00:23:30.759 "admin_qpairs": 1, 00:23:30.759 "io_qpairs": 2, 00:23:30.759 "current_admin_qpairs": 1, 00:23:30.759 "current_io_qpairs": 2, 00:23:30.759 "pending_bdev_io": 0, 00:23:30.759 "completed_nvme_io": 28155, 00:23:30.759 "transports": [ 00:23:30.759 { 00:23:30.759 "trtype": "TCP" 00:23:30.759 } 00:23:30.759 ] 00:23:30.759 }, 00:23:30.759 { 00:23:30.759 "name": "nvmf_tgt_poll_group_001", 00:23:30.759 "admin_qpairs": 0, 00:23:30.759 "io_qpairs": 2, 00:23:30.759 "current_admin_qpairs": 0, 00:23:30.759 "current_io_qpairs": 2, 00:23:30.759 "pending_bdev_io": 0, 00:23:30.759 "completed_nvme_io": 28517, 00:23:30.759 "transports": [ 00:23:30.759 { 00:23:30.759 "trtype": "TCP" 00:23:30.759 } 00:23:30.759 ] 00:23:30.759 }, 00:23:30.759 { 00:23:30.759 "name": "nvmf_tgt_poll_group_002", 00:23:30.759 "admin_qpairs": 0, 00:23:30.759 "io_qpairs": 0, 00:23:30.759 "current_admin_qpairs": 0, 00:23:30.759 "current_io_qpairs": 0, 00:23:30.759 "pending_bdev_io": 0, 00:23:30.759 "completed_nvme_io": 0, 00:23:30.759 "transports": 
[ 00:23:30.759 { 00:23:30.759 "trtype": "TCP" 00:23:30.759 } 00:23:30.759 ] 00:23:30.759 }, 00:23:30.759 { 00:23:30.759 "name": "nvmf_tgt_poll_group_003", 00:23:30.759 "admin_qpairs": 0, 00:23:30.759 "io_qpairs": 0, 00:23:30.759 "current_admin_qpairs": 0, 00:23:30.759 "current_io_qpairs": 0, 00:23:30.759 "pending_bdev_io": 0, 00:23:30.759 "completed_nvme_io": 0, 00:23:30.759 "transports": [ 00:23:30.759 { 00:23:30.759 "trtype": "TCP" 00:23:30.759 } 00:23:30.759 ] 00:23:30.759 } 00:23:30.759 ] 00:23:30.759 }' 00:23:30.759 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:23:30.759 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:23:30.759 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:23:30.759 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:23:30.759 08:20:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 1749759 00:23:38.870 Initializing NVMe Controllers 00:23:38.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:38.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:23:38.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:23:38.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:23:38.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:23:38.870 Initialization complete. Launching workers. 
00:23:38.870 ======================================================== 00:23:38.870 Latency(us) 00:23:38.870 Device Information : IOPS MiB/s Average min max 00:23:38.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7428.10 29.02 8641.46 1375.72 53436.19 00:23:38.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7571.90 29.58 8452.40 1473.64 54410.41 00:23:38.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7709.40 30.11 8302.10 1473.89 52178.53 00:23:38.871 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 7363.40 28.76 8692.21 1562.83 53176.23 00:23:38.871 ======================================================== 00:23:38.871 Total : 30072.80 117.47 8519.29 1375.72 54410.41 00:23:38.871 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:38.871 rmmod nvme_tcp 00:23:38.871 rmmod nvme_fabrics 00:23:38.871 rmmod nvme_keyring 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:23:38.871 08:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 1749709 ']' 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 1749709 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 1749709 ']' 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 1749709 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1749709 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1749709' 00:23:38.871 killing process with pid 1749709 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 1749709 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 1749709 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@254 -- # local dev 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # remove_target_ns 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:38.871 08:20:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # delete_main_bridge 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@121 -- # return 0 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:23:40.790 08:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@274 -- # iptr 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-save 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@548 -- # iptables-restore 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:23:40.790 00:23:40.790 real 0m50.056s 00:23:40.790 user 2m46.845s 00:23:40.790 sys 0m10.477s 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:23:40.790 ************************************ 00:23:40.790 END TEST nvmf_perf_adq 00:23:40.790 ************************************ 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:40.790 08:20:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:23:41.050 ************************************ 00:23:41.050 START TEST nvmf_shutdown 00:23:41.050 ************************************ 00:23:41.050 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:23:41.050 * Looking for test storage... 00:23:41.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:23:41.050 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:41.050 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:23:41.050 08:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:23:41.050 08:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] 
> ver2[v] )) 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:41.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.050 --rc genhtml_branch_coverage=1 00:23:41.050 --rc genhtml_function_coverage=1 00:23:41.050 --rc genhtml_legend=1 00:23:41.050 --rc geninfo_all_blocks=1 00:23:41.050 --rc geninfo_unexecuted_blocks=1 00:23:41.050 00:23:41.050 ' 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:41.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.050 --rc genhtml_branch_coverage=1 00:23:41.050 --rc genhtml_function_coverage=1 00:23:41.050 --rc genhtml_legend=1 00:23:41.050 --rc geninfo_all_blocks=1 00:23:41.050 --rc geninfo_unexecuted_blocks=1 00:23:41.050 00:23:41.050 ' 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:41.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.050 --rc genhtml_branch_coverage=1 00:23:41.050 --rc genhtml_function_coverage=1 00:23:41.050 --rc genhtml_legend=1 00:23:41.050 --rc geninfo_all_blocks=1 00:23:41.050 --rc geninfo_unexecuted_blocks=1 00:23:41.050 00:23:41.050 ' 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:41.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:41.050 --rc genhtml_branch_coverage=1 00:23:41.050 --rc genhtml_function_coverage=1 00:23:41.050 --rc genhtml_legend=1 
00:23:41.050 --rc geninfo_all_blocks=1 00:23:41.050 --rc geninfo_unexecuted_blocks=1 00:23:41.050 00:23:41.050 ' 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:23:41.050 08:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:41.050 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@50 -- # : 0 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:41.051 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:41.051 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:41.311 ************************************ 00:23:41.311 START TEST nvmf_shutdown_tc1 00:23:41.311 ************************************ 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:41.311 08:20:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # remove_target_ns 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # xtrace_disable 00:23:41.311 08:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # pci_devs=() 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:48.043 08:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # net_devs=() 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # e810=() 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # local -ga e810 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # x722=() 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # local -ga x722 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # mlx=() 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # local -ga mlx 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:48.043 08:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:48.043 Found 0000:86:00.0 (0x8086 - 0x159b) 
00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:48.043 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:48.043 Found net devices under 0000:86:00.0: cvl_0_0 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:48.043 08:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:48.043 Found net devices under 0000:86:00.1: cvl_0_1 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # is_hw=yes 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:48.043 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@247 -- # create_target_ns 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@135 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@28 -- # local -g _dev 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # 
(( _dev = _dev, max = _dev )) 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # ips=() 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:23:48.044 08:21:00 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772161 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:48.044 10.0.0.1 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772162 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:48.044 10.0.0.2 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:23:48.044 
08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:48.044 08:21:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:48.044 08:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/setup.sh@98 -- # local dev=initiator0 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:48.044 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:48.045 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:48.045 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.501 ms 00:23:48.045 00:23:48.045 --- 10.0.0.1 ping statistics --- 00:23:48.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.045 rtt min/avg/max/mdev = 0.501/0.501/0.501/0.000 ms 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=target0 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:23:48.045 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:48.045 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:23:48.045 00:23:48.045 --- 10.0.0.2 ping statistics --- 00:23:48.045 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:48.045 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # return 0 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:48.045 08:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:48.045 08:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # return 1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev= 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@160 -- # return 0 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:48.045 08:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=target0 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:23:48.045 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@170 -- # get_ip_address target1 
NVMF_TARGET_NS_CMD 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # local dev=target1 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # return 1 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@159 -- # dev= 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@160 -- # return 0 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:23:48.046 ' 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:48.046 08:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # nvmfpid=1755094 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # waitforlisten 1755094 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1755094 ']' 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.046 08:21:01 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:48.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.046 08:21:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.046 [2024-11-20 08:21:01.263817] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:23:48.046 [2024-11-20 08:21:01.263866] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:48.046 [2024-11-20 08:21:01.344088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:48.046 [2024-11-20 08:21:01.385888] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:48.046 [2024-11-20 08:21:01.385926] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:48.046 [2024-11-20 08:21:01.385933] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:48.046 [2024-11-20 08:21:01.385939] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:48.046 [2024-11-20 08:21:01.385944] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:48.046 [2024-11-20 08:21:01.387553] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:48.046 [2024-11-20 08:21:01.387663] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:48.046 [2024-11-20 08:21:01.387769] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:48.046 [2024-11-20 08:21:01.387770] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.306 [2024-11-20 08:21:02.146162] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.306 08:21:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:48.306 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.306 Malloc1 00:23:48.306 [2024-11-20 08:21:02.259307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:48.306 Malloc2 00:23:48.306 Malloc3 00:23:48.565 Malloc4 00:23:48.565 Malloc5 00:23:48.565 Malloc6 00:23:48.565 Malloc7 00:23:48.565 Malloc8 00:23:48.565 Malloc9 
00:23:48.825 Malloc10 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=1755416 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 1755416 /var/tmp/bdevperf.sock 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 1755416 ']' 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:48.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:48.825 { 00:23:48.825 "params": { 00:23:48.825 "name": "Nvme$subsystem", 00:23:48.825 "trtype": "$TEST_TRANSPORT", 00:23:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.825 "adrfam": "ipv4", 00:23:48.825 "trsvcid": "$NVMF_PORT", 00:23:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.825 "hdgst": ${hdgst:-false}, 00:23:48.825 "ddgst": ${ddgst:-false} 00:23:48.825 }, 00:23:48.825 "method": "bdev_nvme_attach_controller" 00:23:48.825 } 00:23:48.825 EOF 00:23:48.825 )") 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:48.825 { 00:23:48.825 "params": { 00:23:48.825 "name": "Nvme$subsystem", 00:23:48.825 "trtype": "$TEST_TRANSPORT", 00:23:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.825 "adrfam": "ipv4", 00:23:48.825 "trsvcid": "$NVMF_PORT", 00:23:48.825 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.825 "hdgst": ${hdgst:-false}, 00:23:48.825 "ddgst": ${ddgst:-false} 00:23:48.825 }, 00:23:48.825 "method": "bdev_nvme_attach_controller" 00:23:48.825 } 00:23:48.825 EOF 00:23:48.825 )") 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:48.825 { 00:23:48.825 "params": { 00:23:48.825 "name": "Nvme$subsystem", 00:23:48.825 "trtype": "$TEST_TRANSPORT", 00:23:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.825 "adrfam": "ipv4", 00:23:48.825 "trsvcid": "$NVMF_PORT", 00:23:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.825 "hdgst": ${hdgst:-false}, 00:23:48.825 "ddgst": ${ddgst:-false} 00:23:48.825 }, 00:23:48.825 "method": "bdev_nvme_attach_controller" 00:23:48.825 } 00:23:48.825 EOF 00:23:48.825 )") 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:48.825 { 00:23:48.825 "params": { 00:23:48.825 "name": "Nvme$subsystem", 00:23:48.825 "trtype": "$TEST_TRANSPORT", 00:23:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.825 "adrfam": "ipv4", 00:23:48.825 "trsvcid": "$NVMF_PORT", 00:23:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.825 "hdgst": 
${hdgst:-false}, 00:23:48.825 "ddgst": ${ddgst:-false} 00:23:48.825 }, 00:23:48.825 "method": "bdev_nvme_attach_controller" 00:23:48.825 } 00:23:48.825 EOF 00:23:48.825 )") 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:48.825 { 00:23:48.825 "params": { 00:23:48.825 "name": "Nvme$subsystem", 00:23:48.825 "trtype": "$TEST_TRANSPORT", 00:23:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.825 "adrfam": "ipv4", 00:23:48.825 "trsvcid": "$NVMF_PORT", 00:23:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.825 "hdgst": ${hdgst:-false}, 00:23:48.825 "ddgst": ${ddgst:-false} 00:23:48.825 }, 00:23:48.825 "method": "bdev_nvme_attach_controller" 00:23:48.825 } 00:23:48.825 EOF 00:23:48.825 )") 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:48.825 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:48.825 { 00:23:48.825 "params": { 00:23:48.825 "name": "Nvme$subsystem", 00:23:48.825 "trtype": "$TEST_TRANSPORT", 00:23:48.825 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.825 "adrfam": "ipv4", 00:23:48.825 "trsvcid": "$NVMF_PORT", 00:23:48.825 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.825 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.825 "hdgst": ${hdgst:-false}, 00:23:48.825 "ddgst": ${ddgst:-false} 00:23:48.825 }, 00:23:48.825 "method": "bdev_nvme_attach_controller" 
00:23:48.826 } 00:23:48.826 EOF 00:23:48.826 )") 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:48.826 [2024-11-20 08:21:02.728190] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:23:48.826 [2024-11-20 08:21:02.728246] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:48.826 { 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme$subsystem", 00:23:48.826 "trtype": "$TEST_TRANSPORT", 00:23:48.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "$NVMF_PORT", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.826 "hdgst": ${hdgst:-false}, 00:23:48.826 "ddgst": ${ddgst:-false} 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 } 00:23:48.826 EOF 00:23:48.826 )") 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:48.826 { 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme$subsystem", 00:23:48.826 "trtype": "$TEST_TRANSPORT", 00:23:48.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "$NVMF_PORT", 
00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.826 "hdgst": ${hdgst:-false}, 00:23:48.826 "ddgst": ${ddgst:-false} 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 } 00:23:48.826 EOF 00:23:48.826 )") 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:48.826 { 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme$subsystem", 00:23:48.826 "trtype": "$TEST_TRANSPORT", 00:23:48.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "$NVMF_PORT", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:48.826 "hdgst": ${hdgst:-false}, 00:23:48.826 "ddgst": ${ddgst:-false} 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 } 00:23:48.826 EOF 00:23:48.826 )") 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:48.826 { 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme$subsystem", 00:23:48.826 "trtype": "$TEST_TRANSPORT", 00:23:48.826 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "$NVMF_PORT", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 
00:23:48.826 "hdgst": ${hdgst:-false}, 00:23:48.826 "ddgst": ${ddgst:-false} 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 } 00:23:48.826 EOF 00:23:48.826 )") 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:23:48.826 08:21:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme1", 00:23:48.826 "trtype": "tcp", 00:23:48.826 "traddr": "10.0.0.2", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "4420", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:48.826 "hdgst": false, 00:23:48.826 "ddgst": false 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 },{ 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme2", 00:23:48.826 "trtype": "tcp", 00:23:48.826 "traddr": "10.0.0.2", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "4420", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:48.826 "hdgst": false, 00:23:48.826 "ddgst": false 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 },{ 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme3", 00:23:48.826 "trtype": "tcp", 00:23:48.826 "traddr": "10.0.0.2", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "4420", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:48.826 "hdgst": false, 00:23:48.826 "ddgst": false 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 },{ 00:23:48.826 "params": { 00:23:48.826 
"name": "Nvme4", 00:23:48.826 "trtype": "tcp", 00:23:48.826 "traddr": "10.0.0.2", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "4420", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:48.826 "hdgst": false, 00:23:48.826 "ddgst": false 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 },{ 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme5", 00:23:48.826 "trtype": "tcp", 00:23:48.826 "traddr": "10.0.0.2", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "4420", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:48.826 "hdgst": false, 00:23:48.826 "ddgst": false 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 },{ 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme6", 00:23:48.826 "trtype": "tcp", 00:23:48.826 "traddr": "10.0.0.2", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "4420", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:48.826 "hdgst": false, 00:23:48.826 "ddgst": false 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 },{ 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme7", 00:23:48.826 "trtype": "tcp", 00:23:48.826 "traddr": "10.0.0.2", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "4420", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:48.826 "hdgst": false, 00:23:48.826 "ddgst": false 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 },{ 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme8", 00:23:48.826 "trtype": "tcp", 00:23:48.826 "traddr": "10.0.0.2", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "4420", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:48.826 
"hdgst": false, 00:23:48.826 "ddgst": false 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 },{ 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme9", 00:23:48.826 "trtype": "tcp", 00:23:48.826 "traddr": "10.0.0.2", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "4420", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:23:48.826 "hdgst": false, 00:23:48.826 "ddgst": false 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 },{ 00:23:48.826 "params": { 00:23:48.826 "name": "Nvme10", 00:23:48.826 "trtype": "tcp", 00:23:48.826 "traddr": "10.0.0.2", 00:23:48.826 "adrfam": "ipv4", 00:23:48.826 "trsvcid": "4420", 00:23:48.826 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:48.826 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:48.826 "hdgst": false, 00:23:48.826 "ddgst": false 00:23:48.826 }, 00:23:48.826 "method": "bdev_nvme_attach_controller" 00:23:48.826 }' 00:23:48.826 [2024-11-20 08:21:02.802946] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.826 [2024-11-20 08:21:02.843973] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.733 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.733 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:23:50.733 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:23:50.733 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:50.733 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:50.733 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:50.733 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 1755416 00:23:50.733 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:23:50.733 08:21:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:23:51.670 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 1755416 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:23:51.670 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 1755094 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:51.671 { 00:23:51.671 "params": { 00:23:51.671 "name": "Nvme$subsystem", 00:23:51.671 "trtype": "$TEST_TRANSPORT", 00:23:51.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.671 "adrfam": "ipv4", 00:23:51.671 "trsvcid": "$NVMF_PORT", 00:23:51.671 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.671 "hdgst": ${hdgst:-false}, 00:23:51.671 "ddgst": ${ddgst:-false} 00:23:51.671 }, 00:23:51.671 "method": "bdev_nvme_attach_controller" 00:23:51.671 } 00:23:51.671 EOF 00:23:51.671 )") 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:51.671 { 00:23:51.671 "params": { 00:23:51.671 "name": "Nvme$subsystem", 00:23:51.671 "trtype": "$TEST_TRANSPORT", 00:23:51.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.671 "adrfam": "ipv4", 00:23:51.671 "trsvcid": "$NVMF_PORT", 00:23:51.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.671 "hdgst": ${hdgst:-false}, 00:23:51.671 "ddgst": ${ddgst:-false} 00:23:51.671 }, 00:23:51.671 "method": "bdev_nvme_attach_controller" 00:23:51.671 } 00:23:51.671 EOF 00:23:51.671 )") 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:51.671 { 00:23:51.671 "params": { 00:23:51.671 "name": "Nvme$subsystem", 00:23:51.671 "trtype": "$TEST_TRANSPORT", 00:23:51.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.671 "adrfam": "ipv4", 00:23:51.671 "trsvcid": "$NVMF_PORT", 00:23:51.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.671 "hdgst": 
${hdgst:-false}, 00:23:51.671 "ddgst": ${ddgst:-false} 00:23:51.671 }, 00:23:51.671 "method": "bdev_nvme_attach_controller" 00:23:51.671 } 00:23:51.671 EOF 00:23:51.671 )") 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:51.671 { 00:23:51.671 "params": { 00:23:51.671 "name": "Nvme$subsystem", 00:23:51.671 "trtype": "$TEST_TRANSPORT", 00:23:51.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.671 "adrfam": "ipv4", 00:23:51.671 "trsvcid": "$NVMF_PORT", 00:23:51.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.671 "hdgst": ${hdgst:-false}, 00:23:51.671 "ddgst": ${ddgst:-false} 00:23:51.671 }, 00:23:51.671 "method": "bdev_nvme_attach_controller" 00:23:51.671 } 00:23:51.671 EOF 00:23:51.671 )") 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:51.671 { 00:23:51.671 "params": { 00:23:51.671 "name": "Nvme$subsystem", 00:23:51.671 "trtype": "$TEST_TRANSPORT", 00:23:51.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.671 "adrfam": "ipv4", 00:23:51.671 "trsvcid": "$NVMF_PORT", 00:23:51.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.671 "hdgst": ${hdgst:-false}, 00:23:51.671 "ddgst": ${ddgst:-false} 00:23:51.671 }, 00:23:51.671 "method": "bdev_nvme_attach_controller" 
00:23:51.671 } 00:23:51.671 EOF 00:23:51.671 )") 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:51.671 { 00:23:51.671 "params": { 00:23:51.671 "name": "Nvme$subsystem", 00:23:51.671 "trtype": "$TEST_TRANSPORT", 00:23:51.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.671 "adrfam": "ipv4", 00:23:51.671 "trsvcid": "$NVMF_PORT", 00:23:51.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.671 "hdgst": ${hdgst:-false}, 00:23:51.671 "ddgst": ${ddgst:-false} 00:23:51.671 }, 00:23:51.671 "method": "bdev_nvme_attach_controller" 00:23:51.671 } 00:23:51.671 EOF 00:23:51.671 )") 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:51.671 { 00:23:51.671 "params": { 00:23:51.671 "name": "Nvme$subsystem", 00:23:51.671 "trtype": "$TEST_TRANSPORT", 00:23:51.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.671 "adrfam": "ipv4", 00:23:51.671 "trsvcid": "$NVMF_PORT", 00:23:51.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.671 "hdgst": ${hdgst:-false}, 00:23:51.671 "ddgst": ${ddgst:-false} 00:23:51.671 }, 00:23:51.671 "method": "bdev_nvme_attach_controller" 00:23:51.671 } 00:23:51.671 EOF 00:23:51.671 )") 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@394 -- # cat 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:51.671 { 00:23:51.671 "params": { 00:23:51.671 "name": "Nvme$subsystem", 00:23:51.671 "trtype": "$TEST_TRANSPORT", 00:23:51.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.671 "adrfam": "ipv4", 00:23:51.671 "trsvcid": "$NVMF_PORT", 00:23:51.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.671 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.671 "hdgst": ${hdgst:-false}, 00:23:51.671 "ddgst": ${ddgst:-false} 00:23:51.671 }, 00:23:51.671 "method": "bdev_nvme_attach_controller" 00:23:51.671 } 00:23:51.671 EOF 00:23:51.671 )") 00:23:51.671 [2024-11-20 08:21:05.675847] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:23:51.671 [2024-11-20 08:21:05.675898] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1756068 ] 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:51.671 { 00:23:51.671 "params": { 00:23:51.671 "name": "Nvme$subsystem", 00:23:51.671 "trtype": "$TEST_TRANSPORT", 00:23:51.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.671 "adrfam": "ipv4", 00:23:51.671 "trsvcid": "$NVMF_PORT", 00:23:51.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.671 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:23:51.671 "hdgst": ${hdgst:-false}, 00:23:51.671 "ddgst": ${ddgst:-false} 00:23:51.671 }, 00:23:51.671 "method": "bdev_nvme_attach_controller" 00:23:51.671 } 00:23:51.671 EOF 00:23:51.671 )") 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:51.671 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:51.671 { 00:23:51.671 "params": { 00:23:51.671 "name": "Nvme$subsystem", 00:23:51.671 "trtype": "$TEST_TRANSPORT", 00:23:51.671 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:51.671 "adrfam": "ipv4", 00:23:51.671 "trsvcid": "$NVMF_PORT", 00:23:51.671 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:51.672 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:51.672 "hdgst": ${hdgst:-false}, 00:23:51.672 "ddgst": ${ddgst:-false} 00:23:51.672 }, 00:23:51.672 "method": "bdev_nvme_attach_controller" 00:23:51.672 } 00:23:51.672 EOF 00:23:51.672 )") 00:23:51.672 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:23:51.930 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 
00:23:51.930 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:23:51.930 08:21:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:23:51.930 "params": { 00:23:51.930 "name": "Nvme1", 00:23:51.930 "trtype": "tcp", 00:23:51.930 "traddr": "10.0.0.2", 00:23:51.930 "adrfam": "ipv4", 00:23:51.930 "trsvcid": "4420", 00:23:51.930 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:51.930 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:51.930 "hdgst": false, 00:23:51.930 "ddgst": false 00:23:51.930 }, 00:23:51.930 "method": "bdev_nvme_attach_controller" 00:23:51.930 },{ 00:23:51.930 "params": { 00:23:51.930 "name": "Nvme2", 00:23:51.930 "trtype": "tcp", 00:23:51.930 "traddr": "10.0.0.2", 00:23:51.930 "adrfam": "ipv4", 00:23:51.930 "trsvcid": "4420", 00:23:51.930 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:51.930 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:51.930 "hdgst": false, 00:23:51.930 "ddgst": false 00:23:51.930 }, 00:23:51.930 "method": "bdev_nvme_attach_controller" 00:23:51.930 },{ 00:23:51.930 "params": { 00:23:51.930 "name": "Nvme3", 00:23:51.930 "trtype": "tcp", 00:23:51.930 "traddr": "10.0.0.2", 00:23:51.930 "adrfam": "ipv4", 00:23:51.930 "trsvcid": "4420", 00:23:51.930 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:51.930 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:51.930 "hdgst": false, 00:23:51.930 "ddgst": false 00:23:51.930 }, 00:23:51.930 "method": "bdev_nvme_attach_controller" 00:23:51.930 },{ 00:23:51.930 "params": { 00:23:51.930 "name": "Nvme4", 00:23:51.930 "trtype": "tcp", 00:23:51.930 "traddr": "10.0.0.2", 00:23:51.930 "adrfam": "ipv4", 00:23:51.930 "trsvcid": "4420", 00:23:51.930 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:51.930 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:51.930 "hdgst": false, 00:23:51.930 "ddgst": false 00:23:51.930 }, 00:23:51.930 "method": "bdev_nvme_attach_controller" 00:23:51.930 },{ 00:23:51.930 "params": { 
00:23:51.930 "name": "Nvme5", 00:23:51.930 "trtype": "tcp", 00:23:51.930 "traddr": "10.0.0.2", 00:23:51.930 "adrfam": "ipv4", 00:23:51.930 "trsvcid": "4420", 00:23:51.930 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:51.930 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:51.930 "hdgst": false, 00:23:51.930 "ddgst": false 00:23:51.930 }, 00:23:51.930 "method": "bdev_nvme_attach_controller" 00:23:51.930 },{ 00:23:51.930 "params": { 00:23:51.930 "name": "Nvme6", 00:23:51.930 "trtype": "tcp", 00:23:51.930 "traddr": "10.0.0.2", 00:23:51.930 "adrfam": "ipv4", 00:23:51.930 "trsvcid": "4420", 00:23:51.930 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:51.930 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:51.930 "hdgst": false, 00:23:51.930 "ddgst": false 00:23:51.930 }, 00:23:51.930 "method": "bdev_nvme_attach_controller" 00:23:51.930 },{ 00:23:51.931 "params": { 00:23:51.931 "name": "Nvme7", 00:23:51.931 "trtype": "tcp", 00:23:51.931 "traddr": "10.0.0.2", 00:23:51.931 "adrfam": "ipv4", 00:23:51.931 "trsvcid": "4420", 00:23:51.931 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:51.931 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:51.931 "hdgst": false, 00:23:51.931 "ddgst": false 00:23:51.931 }, 00:23:51.931 "method": "bdev_nvme_attach_controller" 00:23:51.931 },{ 00:23:51.931 "params": { 00:23:51.931 "name": "Nvme8", 00:23:51.931 "trtype": "tcp", 00:23:51.931 "traddr": "10.0.0.2", 00:23:51.931 "adrfam": "ipv4", 00:23:51.931 "trsvcid": "4420", 00:23:51.931 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:51.931 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:51.931 "hdgst": false, 00:23:51.931 "ddgst": false 00:23:51.931 }, 00:23:51.931 "method": "bdev_nvme_attach_controller" 00:23:51.931 },{ 00:23:51.931 "params": { 00:23:51.931 "name": "Nvme9", 00:23:51.931 "trtype": "tcp", 00:23:51.931 "traddr": "10.0.0.2", 00:23:51.931 "adrfam": "ipv4", 00:23:51.931 "trsvcid": "4420", 00:23:51.931 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:51.931 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:51.931 "hdgst": false, 00:23:51.931 "ddgst": false 00:23:51.931 }, 00:23:51.931 "method": "bdev_nvme_attach_controller" 00:23:51.931 },{ 00:23:51.931 "params": { 00:23:51.931 "name": "Nvme10", 00:23:51.931 "trtype": "tcp", 00:23:51.931 "traddr": "10.0.0.2", 00:23:51.931 "adrfam": "ipv4", 00:23:51.931 "trsvcid": "4420", 00:23:51.931 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:51.931 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:51.931 "hdgst": false, 00:23:51.931 "ddgst": false 00:23:51.931 }, 00:23:51.931 "method": "bdev_nvme_attach_controller" 00:23:51.931 }' 00:23:51.931 [2024-11-20 08:21:05.754775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.931 [2024-11-20 08:21:05.795816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.308 Running I/O for 1 seconds... 00:23:54.502 2212.00 IOPS, 138.25 MiB/s 00:23:54.502 Latency(us) 00:23:54.502 [2024-11-20T07:21:08.530Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.502 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.503 Verification LBA range: start 0x0 length 0x400 00:23:54.503 Nvme1n1 : 1.06 240.86 15.05 0.00 0.00 263068.53 16727.28 221698.93 00:23:54.503 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.503 Verification LBA range: start 0x0 length 0x400 00:23:54.503 Nvme2n1 : 1.07 239.52 14.97 0.00 0.00 260434.41 27962.03 216705.71 00:23:54.503 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.503 Verification LBA range: start 0x0 length 0x400 00:23:54.503 Nvme3n1 : 1.12 293.72 18.36 0.00 0.00 208369.56 5960.66 216705.71 00:23:54.503 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.503 Verification LBA range: start 0x0 length 0x400 00:23:54.503 Nvme4n1 : 1.13 286.36 17.90 0.00 0.00 211785.68 2559.02 216705.71 00:23:54.503 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:23:54.503 Verification LBA range: start 0x0 length 0x400 00:23:54.503 Nvme5n1 : 1.12 288.04 18.00 0.00 0.00 206519.21 9487.12 202724.69 00:23:54.503 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.503 Verification LBA range: start 0x0 length 0x400 00:23:54.503 Nvme6n1 : 1.14 285.99 17.87 0.00 0.00 205980.91 1888.06 235679.94 00:23:54.503 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.503 Verification LBA range: start 0x0 length 0x400 00:23:54.503 Nvme7n1 : 1.13 283.38 17.71 0.00 0.00 204886.70 13356.86 226692.14 00:23:54.503 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.503 Verification LBA range: start 0x0 length 0x400 00:23:54.503 Nvme8n1 : 1.12 285.06 17.82 0.00 0.00 200220.38 13793.77 208716.56 00:23:54.503 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.503 Verification LBA range: start 0x0 length 0x400 00:23:54.503 Nvme9n1 : 1.14 283.61 17.73 0.00 0.00 198698.33 2652.65 221698.93 00:23:54.503 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:23:54.503 Verification LBA range: start 0x0 length 0x400 00:23:54.503 Nvme10n1 : 1.15 282.96 17.68 0.00 0.00 196394.75 12545.46 235679.94 00:23:54.503 [2024-11-20T07:21:08.531Z] =================================================================================================================== 00:23:54.503 [2024-11-20T07:21:08.531Z] Total : 2769.49 173.09 0.00 0.00 213631.56 1888.06 235679.94 00:23:54.503 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:23:54.503 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:23:54.503 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:23:54.503 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:54.503 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:23:54.503 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:54.503 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@99 -- # sync 00:23:54.503 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:54.503 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # set +e 00:23:54.503 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:54.503 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:54.503 rmmod nvme_tcp 00:23:54.761 rmmod nvme_fabrics 00:23:54.761 rmmod nvme_keyring 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # set -e 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # return 0 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # '[' -n 1755094 ']' 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@337 -- # killprocess 1755094 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 1755094 ']' 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 1755094 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1755094 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1755094' 00:23:54.761 killing process with pid 1755094 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 1755094 00:23:54.761 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 1755094 00:23:55.020 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:55.020 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # nvmf_fini 00:23:55.020 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@254 -- # local dev 00:23:55.020 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@257 -- # remove_target_ns 00:23:55.020 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:55.020 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 
15> /dev/null' 00:23:55.020 08:21:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:57.561 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:23:57.561 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:57.561 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@121 -- # return 0 00:23:57.561 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:57.561 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:23:57.561 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:23:57.561 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 
00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # _dev=0 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # dev_map=() 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@274 -- # iptr 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@548 -- # iptables-save 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@548 -- # iptables-restore 00:23:57.562 00:23:57.562 real 0m15.967s 00:23:57.562 user 0m36.518s 00:23:57.562 sys 0m5.805s 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:23:57.562 ************************************ 00:23:57.562 END TEST nvmf_shutdown_tc1 00:23:57.562 ************************************ 
00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:23:57.562 ************************************ 00:23:57.562 START TEST nvmf_shutdown_tc2 00:23:57.562 ************************************ 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # remove_target_ns 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:57.562 08:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # xtrace_disable 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # pci_devs=() 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # net_devs=() 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # e810=() 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 
-- # local -ga e810 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # x722=() 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # local -ga x722 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # mlx=() 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # local -ga mlx 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:57.562 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.562 
08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:57.562 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:57.562 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:57.563 Found net devices under 0000:86:00.0: cvl_0_0 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:57.563 Found net devices under 0000:86:00.1: cvl_0_1 
00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # is_hw=yes 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@247 -- # create_target_ns 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@28 -- # local -g _dev 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # ips=() 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:57.563 08:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:57.563 08:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772161 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:57.563 10.0.0.1 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:57.563 08:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772162 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:57.563 10.0.0.2 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:23:57.563 
08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:23:57.563 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:23:57.564 08:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # 
dev=cvl_0_0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:57.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:57.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.468 ms 00:23:57.564 00:23:57.564 --- 10.0.0.1 ping statistics --- 00:23:57.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.564 rtt min/avg/max/mdev = 0.468/0.468/0.468/0.000 ms 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=target0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:23:57.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:57.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:23:57.564 00:23:57.564 --- 10.0.0.2 ping statistics --- 00:23:57.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:57.564 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # return 0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:23:57.564 08:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:23:57.564 08:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # return 1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev= 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@160 -- # return 0 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:57.564 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:57.565 08:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=target0 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@170 -- # get_ip_address target1 
NVMF_TARGET_NS_CMD 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # local dev=target1 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # return 1 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@159 -- # dev= 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@160 -- # return 0 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:23:57.565 ' 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:57.565 08:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # nvmfpid=1757439 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # waitforlisten 1757439 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1757439 ']' 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.565 08:21:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.565 08:21:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:57.824 [2024-11-20 08:21:11.614132] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:23:57.824 [2024-11-20 08:21:11.614182] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.824 [2024-11-20 08:21:11.698298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:57.824 [2024-11-20 08:21:11.748540] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.824 [2024-11-20 08:21:11.748585] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.824 [2024-11-20 08:21:11.748595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.825 [2024-11-20 08:21:11.748604] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.825 [2024-11-20 08:21:11.748611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
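The `nvmf_tgt` process above is launched with `-m 0x1E`, and the reactor notices in the log show threads starting on cores 1-4 accordingly. As an illustrative sketch of how such a hex coremask maps to core indices (this decoding loop is a stand-in for intuition, not SPDK's actual parsing code):

```shell
#!/usr/bin/env bash
# Decode a hex coremask into the CPU core indices it selects.
# 0x1E = binary 11110, so bits 1, 2, 3 and 4 are set -- matching the
# four reactors the log reports on cores 1-4.
mask=$((0x1E))
cores=()
for ((bit = 0; bit < 64; bit++)); do
  if (( (mask >> bit) & 1 )); then
    cores+=("$bit")
  fi
done
echo "cores: ${cores[*]}"   # cores: 1 2 3 4
```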
00:23:57.825 [2024-11-20 08:21:11.750358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:57.825 [2024-11-20 08:21:11.750468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:57.825 [2024-11-20 08:21:11.750581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.825 [2024-11-20 08:21:11.750581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:23:58.763 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.763 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:23:58.763 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:58.763 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:58.763 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:58.763 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:58.763 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:58.763 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.763 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:58.763 [2024-11-20 08:21:12.485056] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:58.763 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.763 08:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:23:58.763 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:23:58.763 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.764 08:21:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:58.764 Malloc1 00:23:58.764 [2024-11-20 08:21:12.607183] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:58.764 Malloc2 00:23:58.764 Malloc3 00:23:58.764 Malloc4 00:23:58.764 Malloc5 00:23:59.023 Malloc6 00:23:59.023 Malloc7 00:23:59.023 Malloc8 00:23:59.023 Malloc9 
00:23:59.023 Malloc10 00:23:59.023 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.023 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:23:59.023 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:59.023 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:59.023 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=1757740 00:23:59.023 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 1757740 /var/tmp/bdevperf.sock 00:23:59.024 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1757740 ']' 00:23:59.024 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:59.024 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:23:59.024 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:23:59.024 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:59.024 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:23:59.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:59.024 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # config=() 00:23:59.024 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:59.024 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # local subsystem config 00:23:59.024 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:23:59.024 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:59.024 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:59.024 { 00:23:59.024 "params": { 00:23:59.024 "name": "Nvme$subsystem", 00:23:59.024 "trtype": "$TEST_TRANSPORT", 00:23:59.024 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.024 "adrfam": "ipv4", 00:23:59.024 "trsvcid": "$NVMF_PORT", 00:23:59.024 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.024 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.024 "hdgst": ${hdgst:-false}, 00:23:59.024 "ddgst": ${ddgst:-false} 00:23:59.024 }, 00:23:59.024 "method": "bdev_nvme_attach_controller" 00:23:59.024 } 00:23:59.024 EOF 00:23:59.024 )") 00:23:59.283 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:59.283 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:59.283 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:59.283 { 00:23:59.283 "params": { 00:23:59.283 "name": "Nvme$subsystem", 00:23:59.283 "trtype": "$TEST_TRANSPORT", 00:23:59.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.283 
"adrfam": "ipv4", 00:23:59.283 "trsvcid": "$NVMF_PORT", 00:23:59.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.283 "hdgst": ${hdgst:-false}, 00:23:59.283 "ddgst": ${ddgst:-false} 00:23:59.283 }, 00:23:59.283 "method": "bdev_nvme_attach_controller" 00:23:59.283 } 00:23:59.283 EOF 00:23:59.283 )") 00:23:59.283 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:59.283 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:59.283 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:59.283 { 00:23:59.283 "params": { 00:23:59.283 "name": "Nvme$subsystem", 00:23:59.283 "trtype": "$TEST_TRANSPORT", 00:23:59.283 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.283 "adrfam": "ipv4", 00:23:59.283 "trsvcid": "$NVMF_PORT", 00:23:59.283 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.283 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.283 "hdgst": ${hdgst:-false}, 00:23:59.283 "ddgst": ${ddgst:-false} 00:23:59.283 }, 00:23:59.283 "method": "bdev_nvme_attach_controller" 00:23:59.283 } 00:23:59.283 EOF 00:23:59.283 )") 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:59.284 { 00:23:59.284 "params": { 00:23:59.284 "name": "Nvme$subsystem", 00:23:59.284 "trtype": "$TEST_TRANSPORT", 00:23:59.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.284 "adrfam": "ipv4", 00:23:59.284 "trsvcid": "$NVMF_PORT", 00:23:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:23:59.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.284 "hdgst": ${hdgst:-false}, 00:23:59.284 "ddgst": ${ddgst:-false} 00:23:59.284 }, 00:23:59.284 "method": "bdev_nvme_attach_controller" 00:23:59.284 } 00:23:59.284 EOF 00:23:59.284 )") 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:59.284 { 00:23:59.284 "params": { 00:23:59.284 "name": "Nvme$subsystem", 00:23:59.284 "trtype": "$TEST_TRANSPORT", 00:23:59.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.284 "adrfam": "ipv4", 00:23:59.284 "trsvcid": "$NVMF_PORT", 00:23:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.284 "hdgst": ${hdgst:-false}, 00:23:59.284 "ddgst": ${ddgst:-false} 00:23:59.284 }, 00:23:59.284 "method": "bdev_nvme_attach_controller" 00:23:59.284 } 00:23:59.284 EOF 00:23:59.284 )") 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:59.284 { 00:23:59.284 "params": { 00:23:59.284 "name": "Nvme$subsystem", 00:23:59.284 "trtype": "$TEST_TRANSPORT", 00:23:59.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.284 "adrfam": "ipv4", 00:23:59.284 "trsvcid": "$NVMF_PORT", 00:23:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.284 "hdgst": ${hdgst:-false}, 00:23:59.284 "ddgst": 
${ddgst:-false} 00:23:59.284 }, 00:23:59.284 "method": "bdev_nvme_attach_controller" 00:23:59.284 } 00:23:59.284 EOF 00:23:59.284 )") 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:59.284 { 00:23:59.284 "params": { 00:23:59.284 "name": "Nvme$subsystem", 00:23:59.284 "trtype": "$TEST_TRANSPORT", 00:23:59.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.284 "adrfam": "ipv4", 00:23:59.284 "trsvcid": "$NVMF_PORT", 00:23:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.284 "hdgst": ${hdgst:-false}, 00:23:59.284 "ddgst": ${ddgst:-false} 00:23:59.284 }, 00:23:59.284 "method": "bdev_nvme_attach_controller" 00:23:59.284 } 00:23:59.284 EOF 00:23:59.284 )") 00:23:59.284 [2024-11-20 08:21:13.088649] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:23:59.284 [2024-11-20 08:21:13.088700] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1757740 ] 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:59.284 { 00:23:59.284 "params": { 00:23:59.284 "name": "Nvme$subsystem", 00:23:59.284 "trtype": "$TEST_TRANSPORT", 00:23:59.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.284 "adrfam": "ipv4", 00:23:59.284 "trsvcid": "$NVMF_PORT", 00:23:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.284 "hdgst": ${hdgst:-false}, 00:23:59.284 "ddgst": ${ddgst:-false} 00:23:59.284 }, 00:23:59.284 "method": "bdev_nvme_attach_controller" 00:23:59.284 } 00:23:59.284 EOF 00:23:59.284 )") 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:59.284 { 00:23:59.284 "params": { 00:23:59.284 "name": "Nvme$subsystem", 00:23:59.284 "trtype": "$TEST_TRANSPORT", 00:23:59.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.284 "adrfam": "ipv4", 00:23:59.284 "trsvcid": "$NVMF_PORT", 00:23:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.284 "hdgst": 
${hdgst:-false}, 00:23:59.284 "ddgst": ${ddgst:-false} 00:23:59.284 }, 00:23:59.284 "method": "bdev_nvme_attach_controller" 00:23:59.284 } 00:23:59.284 EOF 00:23:59.284 )") 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:23:59.284 { 00:23:59.284 "params": { 00:23:59.284 "name": "Nvme$subsystem", 00:23:59.284 "trtype": "$TEST_TRANSPORT", 00:23:59.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:23:59.284 "adrfam": "ipv4", 00:23:59.284 "trsvcid": "$NVMF_PORT", 00:23:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:23:59.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:23:59.284 "hdgst": ${hdgst:-false}, 00:23:59.284 "ddgst": ${ddgst:-false} 00:23:59.284 }, 00:23:59.284 "method": "bdev_nvme_attach_controller" 00:23:59.284 } 00:23:59.284 EOF 00:23:59.284 )") 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # jq . 
00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@397 -- # IFS=, 00:23:59.284 08:21:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:23:59.284 "params": { 00:23:59.284 "name": "Nvme1", 00:23:59.284 "trtype": "tcp", 00:23:59.284 "traddr": "10.0.0.2", 00:23:59.284 "adrfam": "ipv4", 00:23:59.284 "trsvcid": "4420", 00:23:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:59.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:59.284 "hdgst": false, 00:23:59.284 "ddgst": false 00:23:59.284 }, 00:23:59.284 "method": "bdev_nvme_attach_controller" 00:23:59.284 },{ 00:23:59.284 "params": { 00:23:59.284 "name": "Nvme2", 00:23:59.284 "trtype": "tcp", 00:23:59.284 "traddr": "10.0.0.2", 00:23:59.284 "adrfam": "ipv4", 00:23:59.284 "trsvcid": "4420", 00:23:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:59.284 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:59.284 "hdgst": false, 00:23:59.284 "ddgst": false 00:23:59.284 }, 00:23:59.284 "method": "bdev_nvme_attach_controller" 00:23:59.284 },{ 00:23:59.284 "params": { 00:23:59.284 "name": "Nvme3", 00:23:59.284 "trtype": "tcp", 00:23:59.284 "traddr": "10.0.0.2", 00:23:59.284 "adrfam": "ipv4", 00:23:59.284 "trsvcid": "4420", 00:23:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:23:59.284 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:23:59.284 "hdgst": false, 00:23:59.284 "ddgst": false 00:23:59.284 }, 00:23:59.284 "method": "bdev_nvme_attach_controller" 00:23:59.284 },{ 00:23:59.284 "params": { 00:23:59.284 "name": "Nvme4", 00:23:59.284 "trtype": "tcp", 00:23:59.284 "traddr": "10.0.0.2", 00:23:59.284 "adrfam": "ipv4", 00:23:59.284 "trsvcid": "4420", 00:23:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:23:59.284 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:23:59.284 "hdgst": false, 00:23:59.284 "ddgst": false 00:23:59.284 }, 00:23:59.284 "method": "bdev_nvme_attach_controller" 00:23:59.284 },{ 00:23:59.284 "params": { 
00:23:59.284 "name": "Nvme5", 00:23:59.284 "trtype": "tcp", 00:23:59.284 "traddr": "10.0.0.2", 00:23:59.284 "adrfam": "ipv4", 00:23:59.284 "trsvcid": "4420", 00:23:59.284 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:23:59.284 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:23:59.284 "hdgst": false, 00:23:59.284 "ddgst": false 00:23:59.284 }, 00:23:59.285 "method": "bdev_nvme_attach_controller" 00:23:59.285 },{ 00:23:59.285 "params": { 00:23:59.285 "name": "Nvme6", 00:23:59.285 "trtype": "tcp", 00:23:59.285 "traddr": "10.0.0.2", 00:23:59.285 "adrfam": "ipv4", 00:23:59.285 "trsvcid": "4420", 00:23:59.285 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:23:59.285 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:23:59.285 "hdgst": false, 00:23:59.285 "ddgst": false 00:23:59.285 }, 00:23:59.285 "method": "bdev_nvme_attach_controller" 00:23:59.285 },{ 00:23:59.285 "params": { 00:23:59.285 "name": "Nvme7", 00:23:59.285 "trtype": "tcp", 00:23:59.285 "traddr": "10.0.0.2", 00:23:59.285 "adrfam": "ipv4", 00:23:59.285 "trsvcid": "4420", 00:23:59.285 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:23:59.285 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:23:59.285 "hdgst": false, 00:23:59.285 "ddgst": false 00:23:59.285 }, 00:23:59.285 "method": "bdev_nvme_attach_controller" 00:23:59.285 },{ 00:23:59.285 "params": { 00:23:59.285 "name": "Nvme8", 00:23:59.285 "trtype": "tcp", 00:23:59.285 "traddr": "10.0.0.2", 00:23:59.285 "adrfam": "ipv4", 00:23:59.285 "trsvcid": "4420", 00:23:59.285 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:23:59.285 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:23:59.285 "hdgst": false, 00:23:59.285 "ddgst": false 00:23:59.285 }, 00:23:59.285 "method": "bdev_nvme_attach_controller" 00:23:59.285 },{ 00:23:59.285 "params": { 00:23:59.285 "name": "Nvme9", 00:23:59.285 "trtype": "tcp", 00:23:59.285 "traddr": "10.0.0.2", 00:23:59.285 "adrfam": "ipv4", 00:23:59.285 "trsvcid": "4420", 00:23:59.285 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:23:59.285 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:23:59.285 "hdgst": false, 00:23:59.285 "ddgst": false 00:23:59.285 }, 00:23:59.285 "method": "bdev_nvme_attach_controller" 00:23:59.285 },{ 00:23:59.285 "params": { 00:23:59.285 "name": "Nvme10", 00:23:59.285 "trtype": "tcp", 00:23:59.285 "traddr": "10.0.0.2", 00:23:59.285 "adrfam": "ipv4", 00:23:59.285 "trsvcid": "4420", 00:23:59.285 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:23:59.285 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:23:59.285 "hdgst": false, 00:23:59.285 "ddgst": false 00:23:59.285 }, 00:23:59.285 "method": "bdev_nvme_attach_controller" 00:23:59.285 }' 00:23:59.285 [2024-11-20 08:21:13.166685] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:59.285 [2024-11-20 08:21:13.207738] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:00.665 Running I/O for 10 seconds... 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:01.234 08:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:01.234 08:21:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:01.234 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- target/shutdown.sh@111 -- # killprocess 1757740 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1757740 ']' 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1757740 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1757740 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1757740' 00:24:01.235 killing process with pid 1757740 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1757740 00:24:01.235 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1757740 00:24:01.235 Received shutdown signal, test time was about 0.691763 seconds 00:24:01.235 00:24:01.235 Latency(us) 00:24:01.235 [2024-11-20T07:21:15.263Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.235 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.235 Verification LBA range: start 0x0 length 0x400 00:24:01.235 Nvme1n1 : 0.67 285.81 17.86 0.00 0.00 220286.46 14854.83 213709.78 00:24:01.235 Job: Nvme2n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:24:01.235 Verification LBA range: start 0x0 length 0x400 00:24:01.235 Nvme2n1 : 0.67 301.47 18.84 0.00 0.00 201557.65 8176.40 203723.34 00:24:01.235 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.235 Verification LBA range: start 0x0 length 0x400 00:24:01.235 Nvme3n1 : 0.67 288.64 18.04 0.00 0.00 207386.98 26838.55 189742.32 00:24:01.235 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.235 Verification LBA range: start 0x0 length 0x400 00:24:01.235 Nvme4n1 : 0.66 289.46 18.09 0.00 0.00 201690.94 16227.96 199728.76 00:24:01.235 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.235 Verification LBA range: start 0x0 length 0x400 00:24:01.235 Nvme5n1 : 0.68 281.90 17.62 0.00 0.00 202973.05 17351.44 212711.13 00:24:01.235 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.235 Verification LBA range: start 0x0 length 0x400 00:24:01.235 Nvme6n1 : 0.69 279.07 17.44 0.00 0.00 200060.67 16352.79 219701.64 00:24:01.235 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.235 Verification LBA range: start 0x0 length 0x400 00:24:01.235 Nvme7n1 : 0.68 280.38 17.52 0.00 0.00 193732.67 16727.28 217704.35 00:24:01.235 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.235 Verification LBA range: start 0x0 length 0x400 00:24:01.235 Nvme8n1 : 0.68 283.28 17.70 0.00 0.00 186289.98 29584.82 189742.32 00:24:01.235 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.235 Verification LBA range: start 0x0 length 0x400 00:24:01.235 Nvme9n1 : 0.69 277.82 17.36 0.00 0.00 185583.58 15978.30 226692.14 00:24:01.235 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:01.235 Verification LBA range: start 0x0 length 0x400 00:24:01.235 Nvme10n1 : 0.66 194.87 12.18 0.00 0.00 252709.06 18849.40 
239674.51 00:24:01.235 [2024-11-20T07:21:15.263Z] =================================================================================================================== 00:24:01.235 [2024-11-20T07:21:15.263Z] Total : 2762.70 172.67 0.00 0.00 203578.91 8176.40 239674.51 00:24:01.494 08:21:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 1757439 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@99 -- # sync 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # set +e 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:02.430 rmmod nvme_tcp 00:24:02.430 rmmod nvme_fabrics 00:24:02.430 rmmod nvme_keyring 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # set -e 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # return 0 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # '[' -n 1757439 ']' 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@337 -- # killprocess 1757439 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 1757439 ']' 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 1757439 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:02.430 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1757439 00:24:02.689 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:02.689 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:02.689 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1757439' 00:24:02.689 killing process with pid 1757439 00:24:02.689 08:21:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 1757439 00:24:02.689 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 1757439 00:24:02.948 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:02.948 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # nvmf_fini 00:24:02.948 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@254 -- # local dev 00:24:02.948 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:02.948 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:02.948 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:02.948 08:21:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@121 -- # return 0 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:05.487 08:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # _dev=0 00:24:05.487 08:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # dev_map=() 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@274 -- # iptr 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@548 -- # iptables-save 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@548 -- # iptables-restore 00:24:05.487 00:24:05.487 real 0m7.770s 00:24:05.487 user 0m22.619s 00:24:05.487 sys 0m1.379s 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:05.487 ************************************ 00:24:05.487 END TEST nvmf_shutdown_tc2 00:24:05.487 ************************************ 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:05.487 ************************************ 00:24:05.487 START TEST nvmf_shutdown_tc3 00:24:05.487 ************************************ 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:24:05.487 08:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # remove_target_ns 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # xtrace_disable 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # pci_devs=() 00:24:05.487 08:21:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # net_devs=() 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # e810=() 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # local -ga e810 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # x722=() 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # local -ga x722 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # mlx=() 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # local -ga mlx 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@144 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:05.487 08:21:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:05.487 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:05.488 08:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:05.488 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:05.488 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:05.488 08:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:05.488 Found net devices under 0000:86:00.0: cvl_0_0 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.488 08:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:05.488 Found net devices under 0000:86:00.1: cvl_0_1 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # is_hw=yes 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@247 -- # create_target_ns 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@25 -- # local no=1 type=phy 
transport=tcp ip_pool=0x0a000001 max 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@28 -- # local -g _dev 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # ips=() 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@55 
-- # target=cvl_0_1 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772161 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_0 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:05.488 10.0.0.1 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772162 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:05.488 10.0.0.2 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:05.488 08:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:05.488 08:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:05.488 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:05.489 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:05.489 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.485 ms 00:24:05.489 00:24:05.489 --- 10.0.0.1 ping statistics --- 00:24:05.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.489 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=target0 00:24:05.489 08:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:05.489 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:05.489 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.150 ms 00:24:05.489 00:24:05.489 --- 10.0.0.2 ping statistics --- 00:24:05.489 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:05.489 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # return 0 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:05.489 08:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:05.489 08:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # return 1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev= 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@160 -- # return 0 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:05.489 08:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=target0 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@170 -- # get_ip_address target1 
NVMF_TARGET_NS_CMD 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # local dev=target1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # return 1 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@159 -- # dev= 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@160 -- # return 0 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:24:05.489 ' 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:05.489 08:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # nvmfpid=1758891 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # waitforlisten 1758891 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1758891 ']' 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 
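The interface setup earlier in this log (nvmf/setup.sh@11-13, @197) converts integers from the ip_pool (starting at 0x0a000001) into dotted-quad addresses; the trace shows `printf '%u.%u.%u.%u\n' 10 0 0 1` producing 10.0.0.1. A standalone sketch of that conversion; the bit-shift arithmetic is an assumption about how `val_to_ip` derives the four bytes, only the input/output pairs are taken from the log:

```shell
# Sketch of the val_to_ip step seen in the trace: split a 32-bit
# integer into four bytes and print them as a dotted quad.
# The shifting below is assumed, not copied from nvmf/setup.sh.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) \
        $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) \
        $((  val        & 255 ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1 (initiator side)
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2 (target side)
```

Each initiator/target pair consumes two consecutive addresses, which matches the `(( _dev++, ip_pool += 2 ))` step in the trace.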
00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:05.489 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:05.489 08:21:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:05.489 [2024-11-20 08:21:19.483818] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:24:05.489 [2024-11-20 08:21:19.483865] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:05.748 [2024-11-20 08:21:19.563152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:05.748 [2024-11-20 08:21:19.604295] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:05.748 [2024-11-20 08:21:19.604332] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:05.748 [2024-11-20 08:21:19.604340] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:05.748 [2024-11-20 08:21:19.604346] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:05.748 [2024-11-20 08:21:19.604351] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
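Earlier in this trace (nvmf/setup.sh@73, nvmf/common.sh@547) the suite opens TCP port 4420 through an `ipts` wrapper that appends `-m comment --comment 'SPDK_NVMF:<rule>'` to every rule, so teardown can later find and remove exactly the rules the test added. A runnable sketch of that tagging pattern; `echo` stands in for the real `iptables` call so the sketch runs without root:

```shell
# Sketch of the ipts tagging pattern from the log. Assumption: the
# wrapper simply forwards its arguments and tags the rule with a
# 'SPDK_NVMF:' comment; echo replaces iptables here so no privileges
# are needed.
ipts() {
    echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Deleting by comment later (e.g. matching on the SPDK_NVMF prefix) is what makes the cleanup idempotent even when several test runs have touched the firewall.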
00:24:05.748 [2024-11-20 08:21:19.605916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:05.748 [2024-11-20 08:21:19.606022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:05.748 [2024-11-20 08:21:19.606129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.748 [2024-11-20 08:21:19.606130] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:06.316 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:06.316 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:06.316 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:06.316 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:06.316 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:06.575 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:06.575 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:06.575 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.575 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:06.575 [2024-11-20 08:21:20.375380] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:06.575 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:06.575 08:21:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:06.575 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:06.575 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:06.575 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:06.575 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:06.575 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:06.575 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:06.575 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:06.576 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:06.576 Malloc1 00:24:06.576 [2024-11-20 08:21:20.492233] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:06.576 Malloc2 00:24:06.576 Malloc3 00:24:06.576 Malloc4 00:24:06.835 Malloc5 00:24:06.835 Malloc6 00:24:06.835 Malloc7 00:24:06.835 Malloc8 00:24:06.835 Malloc9 
00:24:07.095 Malloc10 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=1759177 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 1759177 /var/tmp/bdevperf.sock 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 1759177 ']' 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:24:07.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # config=() 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # local subsystem config 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:07.095 { 00:24:07.095 "params": { 00:24:07.095 "name": "Nvme$subsystem", 00:24:07.095 "trtype": "$TEST_TRANSPORT", 00:24:07.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.095 "adrfam": "ipv4", 00:24:07.095 "trsvcid": "$NVMF_PORT", 00:24:07.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.095 "hdgst": ${hdgst:-false}, 00:24:07.095 "ddgst": ${ddgst:-false} 00:24:07.095 }, 00:24:07.095 "method": "bdev_nvme_attach_controller" 00:24:07.095 } 00:24:07.095 EOF 00:24:07.095 )") 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:07.095 { 00:24:07.095 "params": { 00:24:07.095 "name": "Nvme$subsystem", 00:24:07.095 "trtype": "$TEST_TRANSPORT", 00:24:07.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.095 
"adrfam": "ipv4", 00:24:07.095 "trsvcid": "$NVMF_PORT", 00:24:07.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.095 "hdgst": ${hdgst:-false}, 00:24:07.095 "ddgst": ${ddgst:-false} 00:24:07.095 }, 00:24:07.095 "method": "bdev_nvme_attach_controller" 00:24:07.095 } 00:24:07.095 EOF 00:24:07.095 )") 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:07.095 { 00:24:07.095 "params": { 00:24:07.095 "name": "Nvme$subsystem", 00:24:07.095 "trtype": "$TEST_TRANSPORT", 00:24:07.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.095 "adrfam": "ipv4", 00:24:07.095 "trsvcid": "$NVMF_PORT", 00:24:07.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.095 "hdgst": ${hdgst:-false}, 00:24:07.095 "ddgst": ${ddgst:-false} 00:24:07.095 }, 00:24:07.095 "method": "bdev_nvme_attach_controller" 00:24:07.095 } 00:24:07.095 EOF 00:24:07.095 )") 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:07.095 { 00:24:07.095 "params": { 00:24:07.095 "name": "Nvme$subsystem", 00:24:07.095 "trtype": "$TEST_TRANSPORT", 00:24:07.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.095 "adrfam": "ipv4", 00:24:07.095 "trsvcid": "$NVMF_PORT", 00:24:07.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:24:07.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.095 "hdgst": ${hdgst:-false}, 00:24:07.095 "ddgst": ${ddgst:-false} 00:24:07.095 }, 00:24:07.095 "method": "bdev_nvme_attach_controller" 00:24:07.095 } 00:24:07.095 EOF 00:24:07.095 )") 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:07.095 { 00:24:07.095 "params": { 00:24:07.095 "name": "Nvme$subsystem", 00:24:07.095 "trtype": "$TEST_TRANSPORT", 00:24:07.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.095 "adrfam": "ipv4", 00:24:07.095 "trsvcid": "$NVMF_PORT", 00:24:07.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.095 "hdgst": ${hdgst:-false}, 00:24:07.095 "ddgst": ${ddgst:-false} 00:24:07.095 }, 00:24:07.095 "method": "bdev_nvme_attach_controller" 00:24:07.095 } 00:24:07.095 EOF 00:24:07.095 )") 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:07.095 { 00:24:07.095 "params": { 00:24:07.095 "name": "Nvme$subsystem", 00:24:07.095 "trtype": "$TEST_TRANSPORT", 00:24:07.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.095 "adrfam": "ipv4", 00:24:07.095 "trsvcid": "$NVMF_PORT", 00:24:07.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.095 "hdgst": ${hdgst:-false}, 00:24:07.095 "ddgst": 
${ddgst:-false} 00:24:07.095 }, 00:24:07.095 "method": "bdev_nvme_attach_controller" 00:24:07.095 } 00:24:07.095 EOF 00:24:07.095 )") 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:07.095 { 00:24:07.095 "params": { 00:24:07.095 "name": "Nvme$subsystem", 00:24:07.095 "trtype": "$TEST_TRANSPORT", 00:24:07.095 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.095 "adrfam": "ipv4", 00:24:07.095 "trsvcid": "$NVMF_PORT", 00:24:07.095 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.095 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.095 "hdgst": ${hdgst:-false}, 00:24:07.095 "ddgst": ${ddgst:-false} 00:24:07.095 }, 00:24:07.095 "method": "bdev_nvme_attach_controller" 00:24:07.095 } 00:24:07.095 EOF 00:24:07.095 )") 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:24:07.095 [2024-11-20 08:21:20.969005] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:24:07.095 [2024-11-20 08:21:20.969050] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1759177 ] 00:24:07.095 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:07.096 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:07.096 { 00:24:07.096 "params": { 00:24:07.096 "name": "Nvme$subsystem", 00:24:07.096 "trtype": "$TEST_TRANSPORT", 00:24:07.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "$NVMF_PORT", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.096 "hdgst": ${hdgst:-false}, 00:24:07.096 "ddgst": ${ddgst:-false} 00:24:07.096 }, 00:24:07.096 "method": "bdev_nvme_attach_controller" 00:24:07.096 } 00:24:07.096 EOF 00:24:07.096 )") 00:24:07.096 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:24:07.096 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:07.096 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:07.096 { 00:24:07.096 "params": { 00:24:07.096 "name": "Nvme$subsystem", 00:24:07.096 "trtype": "$TEST_TRANSPORT", 00:24:07.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "$NVMF_PORT", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.096 "hdgst": ${hdgst:-false}, 00:24:07.096 "ddgst": ${ddgst:-false} 00:24:07.096 }, 00:24:07.096 "method": 
"bdev_nvme_attach_controller" 00:24:07.096 } 00:24:07.096 EOF 00:24:07.096 )") 00:24:07.096 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:24:07.096 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:24:07.096 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:24:07.096 { 00:24:07.096 "params": { 00:24:07.096 "name": "Nvme$subsystem", 00:24:07.096 "trtype": "$TEST_TRANSPORT", 00:24:07.096 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "$NVMF_PORT", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:07.096 "hdgst": ${hdgst:-false}, 00:24:07.096 "ddgst": ${ddgst:-false} 00:24:07.096 }, 00:24:07.096 "method": "bdev_nvme_attach_controller" 00:24:07.096 } 00:24:07.096 EOF 00:24:07.096 )") 00:24:07.096 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:24:07.096 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # jq . 
00:24:07.096 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@397 -- # IFS=, 00:24:07.096 08:21:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:24:07.096 "params": { 00:24:07.096 "name": "Nvme1", 00:24:07.096 "trtype": "tcp", 00:24:07.096 "traddr": "10.0.0.2", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "4420", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:07.096 "hdgst": false, 00:24:07.096 "ddgst": false 00:24:07.096 }, 00:24:07.096 "method": "bdev_nvme_attach_controller" 00:24:07.096 },{ 00:24:07.096 "params": { 00:24:07.096 "name": "Nvme2", 00:24:07.096 "trtype": "tcp", 00:24:07.096 "traddr": "10.0.0.2", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "4420", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:24:07.096 "hdgst": false, 00:24:07.096 "ddgst": false 00:24:07.096 }, 00:24:07.096 "method": "bdev_nvme_attach_controller" 00:24:07.096 },{ 00:24:07.096 "params": { 00:24:07.096 "name": "Nvme3", 00:24:07.096 "trtype": "tcp", 00:24:07.096 "traddr": "10.0.0.2", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "4420", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:24:07.096 "hdgst": false, 00:24:07.096 "ddgst": false 00:24:07.096 }, 00:24:07.096 "method": "bdev_nvme_attach_controller" 00:24:07.096 },{ 00:24:07.096 "params": { 00:24:07.096 "name": "Nvme4", 00:24:07.096 "trtype": "tcp", 00:24:07.096 "traddr": "10.0.0.2", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "4420", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:24:07.096 "hdgst": false, 00:24:07.096 "ddgst": false 00:24:07.096 }, 00:24:07.096 "method": "bdev_nvme_attach_controller" 00:24:07.096 },{ 00:24:07.096 "params": { 
00:24:07.096 "name": "Nvme5", 00:24:07.096 "trtype": "tcp", 00:24:07.096 "traddr": "10.0.0.2", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "4420", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:24:07.096 "hdgst": false, 00:24:07.096 "ddgst": false 00:24:07.096 }, 00:24:07.096 "method": "bdev_nvme_attach_controller" 00:24:07.096 },{ 00:24:07.096 "params": { 00:24:07.096 "name": "Nvme6", 00:24:07.096 "trtype": "tcp", 00:24:07.096 "traddr": "10.0.0.2", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "4420", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:24:07.096 "hdgst": false, 00:24:07.096 "ddgst": false 00:24:07.096 }, 00:24:07.096 "method": "bdev_nvme_attach_controller" 00:24:07.096 },{ 00:24:07.096 "params": { 00:24:07.096 "name": "Nvme7", 00:24:07.096 "trtype": "tcp", 00:24:07.096 "traddr": "10.0.0.2", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "4420", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:24:07.096 "hdgst": false, 00:24:07.096 "ddgst": false 00:24:07.096 }, 00:24:07.096 "method": "bdev_nvme_attach_controller" 00:24:07.096 },{ 00:24:07.096 "params": { 00:24:07.096 "name": "Nvme8", 00:24:07.096 "trtype": "tcp", 00:24:07.096 "traddr": "10.0.0.2", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "4420", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:24:07.096 "hdgst": false, 00:24:07.096 "ddgst": false 00:24:07.096 }, 00:24:07.096 "method": "bdev_nvme_attach_controller" 00:24:07.096 },{ 00:24:07.096 "params": { 00:24:07.096 "name": "Nvme9", 00:24:07.096 "trtype": "tcp", 00:24:07.096 "traddr": "10.0.0.2", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "4420", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:24:07.096 "hdgst": false, 00:24:07.096 "ddgst": false 00:24:07.096 }, 00:24:07.096 "method": "bdev_nvme_attach_controller" 00:24:07.096 },{ 00:24:07.096 "params": { 00:24:07.096 "name": "Nvme10", 00:24:07.096 "trtype": "tcp", 00:24:07.096 "traddr": "10.0.0.2", 00:24:07.096 "adrfam": "ipv4", 00:24:07.096 "trsvcid": "4420", 00:24:07.096 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:24:07.096 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:24:07.096 "hdgst": false, 00:24:07.096 "ddgst": false 00:24:07.096 }, 00:24:07.096 "method": "bdev_nvme_attach_controller" 00:24:07.096 }' 00:24:07.096 [2024-11-20 08:21:21.044803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:07.096 [2024-11-20 08:21:21.085249] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:08.475 Running I/O for 10 seconds... 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio 
/var/tmp/bdevperf.sock Nvme1n1 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=84 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 84 -ge 100 ']' 00:24:09.044 08:21:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- 
)) 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=195 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 1758891 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1758891 ']' 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1758891 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:24:09.312 08:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1758891 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1758891' 00:24:09.312 killing process with pid 1758891 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 1758891 00:24:09.312 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 1758891 00:24:09.312 [2024-11-20 08:21:23.286094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286420] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286427] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286488] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286494] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286506] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286511] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286523] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286528] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.286534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231700 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.294100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a9290 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.294135] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a9290 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.294143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a9290 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.294150] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a9290 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.294156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a9290 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.294167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13a9290 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296164] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296178] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296185] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296191] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296198] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296221] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296227] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296233] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296239] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296246] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296253] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296259] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296284] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296298] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296304] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296377] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296383] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296389] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296408] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296416] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296422] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296451] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296457] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296462] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296470] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.313 [2024-11-20 08:21:23.296507] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.314 [2024-11-20 08:21:23.296515] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.314 [2024-11-20 08:21:23.296521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.314 [2024-11-20 08:21:23.296527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.314 [2024-11-20 08:21:23.296533] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.314 [2024-11-20 08:21:23.296539] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.314 [2024-11-20 08:21:23.296544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.314 [2024-11-20 08:21:23.296551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.314 [2024-11-20 08:21:23.296558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.314 [2024-11-20 08:21:23.296563] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.314 [2024-11-20 08:21:23.296569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.314 [2024-11-20 08:21:23.296575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1231bf0 is same with the state(6) to be set 00:24:09.314 [2024-11-20 08:21:23.297431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:09.314 [2024-11-20 08:21:23.297502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 
08:21:23.297840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.314 [2024-11-20 08:21:23.297872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.314 [2024-11-20 08:21:23.297878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.297886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.297893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.297900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.297907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.297915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.297922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.297930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.297937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.297945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.297953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.297961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.297967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.297976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.297983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.297991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.297998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 
08:21:23.298176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.315 [2024-11-20 08:21:23.298387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.315 [2024-11-20 08:21:23.298394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.298409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.298424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298554] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:09.316 [2024-11-20 08:21:23.298568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.316 [2024-11-20 08:21:23.298582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.316 [2024-11-20 08:21:23.298597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298604] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.316 [2024-11-20 08:21:23.298610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57620 is same with the state(6) to be set 00:24:09.316 [2024-11-20 08:21:23.298667] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.316 [2024-11-20 08:21:23.298677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.316 [2024-11-20 08:21:23.298693] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.316 [2024-11-20 08:21:23.298707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298714] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.316 [2024-11-20 08:21:23.298720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9d50 is same with the state(6) to be set 00:24:09.316 [2024-11-20 08:21:23.298757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.316 [2024-11-20 08:21:23.298765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.316 [2024-11-20 08:21:23.298779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.316 [2024-11-20 08:21:23.298796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.316 [2024-11-20 08:21:23.298811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.298817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ea1b0 is same with the state(6) to be set 00:24:09.316 [2024-11-20 08:21:23.300535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:24:09.316 [2024-11-20 08:21:23.300567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe57620 (9): Bad file descriptor 00:24:09.316 [2024-11-20 08:21:23.300613] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:09.316 [2024-11-20 08:21:23.300723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 
[2024-11-20 08:21:23.300868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:40960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:41088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:41216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.300987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.300995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:41344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.301003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.301011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:41472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.301018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.301025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.301033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.301026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.316 [2024-11-20 08:21:23.301042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.316 [2024-11-20 08:21:23.301049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.316 [2024-11-20 08:21:23.301050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317
[2024-11-20 08:21:23.301094] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301100] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301107] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301114] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 
[2024-11-20 08:21:23.301137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301156] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301163] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12325b0 is same with the state(6) to be set 00:24:09.317 [2024-11-20 08:21:23.301174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 
[2024-11-20 08:21:23.301291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.317 [2024-11-20 08:21:23.301467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.317 [2024-11-20 08:21:23.301475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301618] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301668] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:38784 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301684] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.318 [2024-11-20 08:21:23.301723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.318 [2024-11-20 08:21:23.301730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301767] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301777] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301789] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301796] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301811] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301836] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301855] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301861] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with 
the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301920] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301926] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301931] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301952] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 
00:24:09.318 [2024-11-20 08:21:23.301970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301975] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.318 [2024-11-20 08:21:23.301981] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.301987] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.301993] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.302000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.302008] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.302014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.302020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.302025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.302031] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.302037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 
08:21:23.302043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.302049] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.302054] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.302061] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.302067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232930 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.302997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303023] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303029] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303053] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303065] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303084] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303090] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303096] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303108] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303115] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303127] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303134] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303206] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303230] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303270] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303275] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303282] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303288] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303294] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303306] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303312] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303319] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303330] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303354] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303360] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303402] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1232e00 is same with the state(6) to be set 00:24:09.319 [2024-11-20 08:21:23.303580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:09.319 [2024-11-20 08:21:23.303632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e44b0 (9): Bad file descriptor 00:24:09.319 [2024-11-20 08:21:23.303798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.319 [2024-11-20 08:21:23.303814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe57620 with addr=10.0.0.2, port=4420 00:24:09.319 [2024-11-20 08:21:23.303824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57620 is same 
with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe57620 (9): Bad file descriptor 00:24:09.320 [2024-11-20 08:21:23.304369] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:09.320 [2024-11-20 08:21:23.304565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304637] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304649] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304655] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304673] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304679] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304715] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304727] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304740] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304746] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304752] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304758] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304770] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304801] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304820] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304843] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304860] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304867] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304877] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304884] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304896] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304952] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.304976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12332d0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.305487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.320 [2024-11-20 08:21:23.305510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e44b0 with addr=10.0.0.2, port=4420 00:24:09.320 [2024-11-20 08:21:23.305518] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e44b0 is same with the state(6) to be set 00:24:09.320 [2024-11-20 08:21:23.305527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:24:09.320 [2024-11-20 08:21:23.305534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:24:09.320 [2024-11-20 08:21:23.305542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:24:09.320 [2024-11-20 08:21:23.305550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:24:09.320 [2024-11-20 08:21:23.305730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.320 [2024-11-20 08:21:23.305744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.320 [2024-11-20 08:21:23.305757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.320 [2024-11-20 08:21:23.305764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.320 [2024-11-20 08:21:23.305776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.320 [2024-11-20 08:21:23.305782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.320 [2024-11-20 08:21:23.305791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.305798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.305793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.305813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.305824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.305833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.305842] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.305849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.305856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.305865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.305872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.305879] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.305895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.305907] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.305916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.305923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.305930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305938] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.305946] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.305954] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.305960] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.305968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.305983] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.305990] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.305995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.305998] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.306002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.306005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.306012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.306014] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.306019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.306021] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.306028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.306028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.306034] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.306037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.306041] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.306046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.306048] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.306055] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.306055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.306064] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.306068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.306071] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.306075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.321 [2024-11-20 08:21:23.306078] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.321 [2024-11-20 08:21:23.306085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.321 [2024-11-20 08:21:23.306085] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.322 [2024-11-20 08:21:23.306095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.322 [2024-11-20 08:21:23.306095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.322 [2024-11-20 08:21:23.306104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.322 [2024-11-20 08:21:23.306106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdec810 is same with the state(6) to be set
00:24:09.322 [2024-11-20 08:21:23.306111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.322 [2024-11-20 08:21:23.306118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set
00:24:09.322 [2024-11-20 08:21:23.306126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306138] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306153] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306159] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 
08:21:23.306206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306213] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306236] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306242] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306248] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306254] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12337c0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306420] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e44b0 (9): Bad file descriptor 00:24:09.322 [2024-11-20 08:21:23.306813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306838] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306849] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306876] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.306888] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.307223] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:09.322 [2024-11-20 08:21:23.307426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:24:09.322 [2024-11-20 08:21:23.307470] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe154d0 (9): Bad file descriptor 00:24:09.322 [2024-11-20 08:21:23.307483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error 
state 00:24:09.322 [2024-11-20 08:21:23.307490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:09.322 [2024-11-20 08:21:23.307497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:09.322 [2024-11-20 08:21:23.307504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:09.322 [2024-11-20 08:21:23.307577] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:09.322 [2024-11-20 08:21:23.308326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.322 [2024-11-20 08:21:23.308347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe154d0 with addr=10.0.0.2, port=4420 00:24:09.322 [2024-11-20 08:21:23.308356] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe154d0 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.308425] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:09.322 [2024-11-20 08:21:23.308532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe154d0 (9): Bad file descriptor 00:24:09.322 [2024-11-20 08:21:23.308611] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:24:09.322 [2024-11-20 08:21:23.308676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:24:09.322 [2024-11-20 08:21:23.308684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:24:09.322 [2024-11-20 08:21:23.308692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 
00:24:09.322 [2024-11-20 08:21:23.308699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:24:09.322 [2024-11-20 08:21:23.308724] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.322 [2024-11-20 08:21:23.308733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.322 [2024-11-20 08:21:23.308741] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.322 [2024-11-20 08:21:23.308748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.322 [2024-11-20 08:21:23.308755] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.322 [2024-11-20 08:21:23.308765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.322 [2024-11-20 08:21:23.308773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.322 [2024-11-20 08:21:23.308779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.322 [2024-11-20 08:21:23.308785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57a10 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.308813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.322 [2024-11-20 08:21:23.308823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.322 [2024-11-20 08:21:23.308830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.322 [2024-11-20 08:21:23.308836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.322 [2024-11-20 08:21:23.308843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.322 [2024-11-20 08:21:23.308851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.322 [2024-11-20 08:21:23.308859] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.322 [2024-11-20 08:21:23.308865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.322 [2024-11-20 08:21:23.308872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fe610 is same with the state(6) to be set 00:24:09.322 [2024-11-20 08:21:23.308895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.322 [2024-11-20 08:21:23.308903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.322 [2024-11-20 08:21:23.308910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.322 [2024-11-20 08:21:23.308916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.322 [2024-11-20 
08:21:23.308924] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.322 [2024-11-20 08:21:23.308931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.322 [2024-11-20 08:21:23.308938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.322 [2024-11-20 08:21:23.308945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.323 [2024-11-20 08:21:23.308952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe14df0 is same with the state(6) to be set 00:24:09.323 [2024-11-20 08:21:23.308978] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.323 [2024-11-20 08:21:23.308987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.323 [2024-11-20 08:21:23.308995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.323 [2024-11-20 08:21:23.309001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.323 [2024-11-20 08:21:23.309011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.323 [2024-11-20 08:21:23.309018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.323 [2024-11-20 08:21:23.309025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST 
(0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.323 [2024-11-20 08:21:23.309032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.323 [2024-11-20 08:21:23.309088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0af20 is same with the state(6) to be set 00:24:09.323 [2024-11-20 08:21:23.309152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e9d50 (9): Bad file descriptor 00:24:09.323 [2024-11-20 08:21:23.309207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ea1b0 (9): Bad file descriptor 00:24:09.323 [2024-11-20 08:21:23.312190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:24:09.323 [2024-11-20 08:21:23.312497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.323 [2024-11-20 08:21:23.312512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe57620 with addr=10.0.0.2, port=4420 00:24:09.323 [2024-11-20 08:21:23.312520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57620 is same with the state(6) to be set 00:24:09.323 [2024-11-20 08:21:23.312556] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe57620 (9): Bad file descriptor 00:24:09.323 [2024-11-20 08:21:23.312591] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:24:09.323 [2024-11-20 08:21:23.312599] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:24:09.323 [2024-11-20 08:21:23.312606] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:24:09.323 [2024-11-20 08:21:23.312613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:24:09.323 [2024-11-20 08:21:23.314399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:24:09.323 [2024-11-20 08:21:23.314684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.323 [2024-11-20 08:21:23.314697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e44b0 with addr=10.0.0.2, port=4420
00:24:09.323 [2024-11-20 08:21:23.314705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e44b0 is same with the state(6) to be set
00:24:09.323 [2024-11-20 08:21:23.314739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e44b0 (9): Bad file descriptor
00:24:09.323 [2024-11-20 08:21:23.314774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state
00:24:09.323 [2024-11-20 08:21:23.314781] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed
00:24:09.323 [2024-11-20 08:21:23.314788] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state.
00:24:09.323 [2024-11-20 08:21:23.314794] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed.
00:24:09.323 [2024-11-20 08:21:23.317896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:24:09.323 [2024-11-20 08:21:23.318152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:09.323 [2024-11-20 08:21:23.318164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe154d0 with addr=10.0.0.2, port=4420
00:24:09.323 [2024-11-20 08:21:23.318175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe154d0 is same with the state(6) to be set
00:24:09.323 [2024-11-20 08:21:23.318213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe154d0 (9): Bad file descriptor
00:24:09.323 [2024-11-20 08:21:23.318249] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state
00:24:09.323 [2024-11-20 08:21:23.318256] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed
00:24:09.323 [2024-11-20 08:21:23.318262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state.
00:24:09.323 [2024-11-20 08:21:23.318269] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed.
00:24:09.323 [2024-11-20 08:21:23.318707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe57a10 (9): Bad file descriptor
00:24:09.323 [2024-11-20 08:21:23.318724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8fe610 (9): Bad file descriptor
00:24:09.323 [2024-11-20 08:21:23.318740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe14df0 (9): Bad file descriptor
00:24:09.323 [2024-11-20 08:21:23.318753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0af20 (9): Bad file descriptor
00:24:09.323 [2024-11-20 08:21:23.318850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.318861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.318872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.318880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.318889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.318896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.318904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.318912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.318920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.318926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.318936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.318942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.318951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.318957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.318965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.318973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.318985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.318991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.319000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.319007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.319017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.319025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.319034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.319041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.319049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.319056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.319065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.319071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.319079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.319086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.319094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.319101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.319108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.319115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.319123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.319130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.323 [2024-11-20 08:21:23.319138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.323 [2024-11-20 08:21:23.319146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.324 [2024-11-20 08:21:23.319749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.324 [2024-11-20 08:21:23.319755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.319763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.319772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.319780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.319787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.319795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.319801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.319811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.319818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.319827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.319833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.319842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.319848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.319856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbee450 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.320845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.320858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.320868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.320875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.320883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.320890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.320899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.320905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.320914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.320920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.320928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.320935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.320944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:09.325 [2024-11-20 08:21:23.320952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:09.325 [2024-11-20 08:21:23.322313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322326] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322360] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322368] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322409] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322417] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322425] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322434] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322450] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322466] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322475] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322500] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322517] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322525] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322545] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322561] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322634] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322643] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322675] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322691] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set
00:24:09.325 [2024-11-20 08:21:23.322717]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.325 [2024-11-20 08:21:23.322725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.325 [2024-11-20 08:21:23.322734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.325 [2024-11-20 08:21:23.322742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1233c90 is same with the state(6) to be set 00:24:09.593 [2024-11-20 08:21:23.327922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.327941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.327953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.327962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.327973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.327984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.327996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 
08:21:23.328126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328246] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 
[2024-11-20 08:21:23.328486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.593 [2024-11-20 08:21:23.328587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.593 [2024-11-20 08:21:23.328598] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328940] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.328983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.328992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.329004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.329013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.329024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.329033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.329045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.329055] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.329066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.329075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.329086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.329095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.329105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xbef620 is same with the state(6) to be set 00:24:09.594 [2024-11-20 08:21:23.330411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.330429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.330443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.330453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.330464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.330473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.330484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.330494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.330504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.330514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.330525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.330537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.330548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.330557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.330568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.330576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.330587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.594 [2024-11-20 08:21:23.330596] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.594 [2024-11-20 08:21:23.330609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 
08:21:23.330946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.330987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.330997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331061] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 
[2024-11-20 08:21:23.331299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.595 [2024-11-20 08:21:23.331407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.595 [2024-11-20 08:21:23.331418] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.331738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc73a60 is same with the state(6) to be set 00:24:09.596 [2024-11-20 08:21:23.331810] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:09.596 [2024-11-20 08:21:23.331825] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:09.596 [2024-11-20 08:21:23.331933] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.596 [2024-11-20 08:21:23.331948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.596 [2024-11-20 08:21:23.331979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.331990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.596 [2024-11-20 08:21:23.332000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.332014] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:09.596 [2024-11-20 08:21:23.332024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.332033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47490 is same with the state(6) to be set 00:24:09.596 [2024-11-20 08:21:23.333330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:24:09.596 [2024-11-20 08:21:23.333352] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe47490 (9): Bad file descriptor 00:24:09.596 [2024-11-20 08:21:23.333511] posix.c:1054:posix_sock_create: 
*ERROR*: connect() failed, errno = 111 00:24:09.596 [2024-11-20 08:21:23.333528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ea1b0 with addr=10.0.0.2, port=4420 00:24:09.596 [2024-11-20 08:21:23.333539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ea1b0 is same with the state(6) to be set 00:24:09.596 [2024-11-20 08:21:23.333688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.596 [2024-11-20 08:21:23.333704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e9d50 with addr=10.0.0.2, port=4420 00:24:09.596 [2024-11-20 08:21:23.333714] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9d50 is same with the state(6) to be set 00:24:09.596 [2024-11-20 08:21:23.334331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.334362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.334383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.334405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.334426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.334448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.334469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.334490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.334519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:09.596 [2024-11-20 08:21:23.334541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.334562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.334584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.334605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.596 [2024-11-20 08:21:23.334626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.596 [2024-11-20 08:21:23.334635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334656] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.334983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.334992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.335004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 
08:21:23.335014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.335026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.335034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.335048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.335057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.335069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.335078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.335089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.335098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.335111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.335122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.597 [2024-11-20 08:21:23.335133] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.597 [2024-11-20 08:21:23.335144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[log condensed: READ sqid:1 cid:38-63 nsid:1 lba:29440-32640 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, 2024-11-20 08:21:23.335155-.335705]
00:24:09.598 [2024-11-20 08:21:23.335715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdedc10 is same with the state(6) to be set
[log condensed: READ sqid:1 cid:0-63 nsid:1 lba:24576-32640 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, 2024-11-20 08:21:23.337019-.338272]
00:24:09.600 [2024-11-20 08:21:23.338280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdf1ce0 is same with the state(6) to be set
[log condensed: READ sqid:1 cid:9-34 nsid:1 lba:25728-28928 len:128, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, 2024-11-20 08:21:23.339322-.339780]
00:24:09.600 [2024-11-20 08:21:23.339789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-11-20 08:21:23.339796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:24:09.600 [2024-11-20 08:21:23.339807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-11-20 08:21:23.339816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.600 [2024-11-20 08:21:23.339825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-11-20 08:21:23.339832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.600 [2024-11-20 08:21:23.339844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-11-20 08:21:23.339851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.600 [2024-11-20 08:21:23.339861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-11-20 08:21:23.339868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.600 [2024-11-20 08:21:23.339877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.600 [2024-11-20 08:21:23.339885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.600 [2024-11-20 08:21:23.339894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 
08:21:23.339902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.339911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.339920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.339929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.339937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.339946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.339954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.339963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.339972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.339981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.339989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.339998] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 
[2024-11-20 08:21:23.340191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.340281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.340289] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdeefc0 is same with the state(6) to be set 00:24:09.601 [2024-11-20 08:21:23.341294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:09.601 [2024-11-20 08:21:23.341492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.601 [2024-11-20 08:21:23.341578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.601 [2024-11-20 08:21:23.341587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341595] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 
08:21:23.341883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341979] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.341986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.341995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.342003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.342011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.342019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.342027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.342035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.342043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.342051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.342059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.342067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.342075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.342083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.342091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.342099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.342108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.342116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.342126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.342133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.342144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.602 [2024-11-20 08:21:23.342151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.602 [2024-11-20 08:21:23.342160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 
[2024-11-20 08:21:23.342168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-20 08:21:23.342184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-20 08:21:23.342205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-20 08:21:23.342222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-20 08:21:23.342242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-20 08:21:23.342259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342267] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-20 08:21:23.342274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-20 08:21:23.342291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-20 08:21:23.342309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-20 08:21:23.342325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-20 08:21:23.342342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-20 08:21:23.342362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:09.603 [2024-11-20 08:21:23.342380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:09.603 [2024-11-20 08:21:23.342388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xdef890 is same with the state(6) to be set 00:24:09.603 [2024-11-20 08:21:23.343402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:24:09.603 [2024-11-20 08:21:23.343422] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:24:09.603 [2024-11-20 08:21:23.343431] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:24:09.603 [2024-11-20 08:21:23.343440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:24:09.603 [2024-11-20 08:21:23.343450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:24:09.603 [2024-11-20 08:21:23.343501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ea1b0 (9): Bad file descriptor 00:24:09.603 [2024-11-20 08:21:23.343513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e9d50 (9): Bad file descriptor 00:24:09.603 [2024-11-20 08:21:23.343542] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:24:09.603 [2024-11-20 08:21:23.343556] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] Unable to perform failover, already in progress. 00:24:09.603 [2024-11-20 08:21:23.343570] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:24:09.603 [2024-11-20 08:21:23.343582] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, already in progress. 00:24:09.603 [2024-11-20 08:21:23.343871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:24:09.603 task offset: 25472 on job bdev=Nvme10n1 fails 00:24:09.603 00:24:09.603 Latency(us) 00:24:09.603 [2024-11-20T07:21:23.631Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:09.603 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.603 Job: Nvme1n1 ended in about 0.95 seconds with error 00:24:09.603 Verification LBA range: start 0x0 length 0x400 00:24:09.603 Nvme1n1 : 0.95 202.37 12.65 67.46 0.00 234773.46 15541.39 211712.49 00:24:09.603 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.603 Job: Nvme2n1 ended in about 0.96 seconds with error 00:24:09.603 Verification LBA range: start 0x0 length 0x400 00:24:09.603 Nvme2n1 : 0.96 200.40 12.53 66.80 0.00 233173.58 17351.44 221698.93 00:24:09.603 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.603 Job: Nvme3n1 ended in about 0.93 seconds with error 00:24:09.603 Verification LBA range: start 0x0 length 0x400 00:24:09.603 Nvme3n1 : 0.93 280.32 17.52 68.74 0.00 175263.96 3105.16 206719.27 00:24:09.603 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.603 Job: Nvme4n1 ended in about 0.94 seconds with error 00:24:09.603 Verification LBA range: start 0x0 length 0x400 
00:24:09.603 Nvme4n1 : 0.94 273.73 17.11 21.39 0.00 203511.70 1443.35 219701.64 00:24:09.603 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.603 Job: Nvme5n1 ended in about 0.96 seconds with error 00:24:09.603 Verification LBA range: start 0x0 length 0x400 00:24:09.603 Nvme5n1 : 0.96 199.03 12.44 66.34 0.00 223205.42 17101.78 220700.28 00:24:09.603 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.603 Job: Nvme6n1 ended in about 0.97 seconds with error 00:24:09.603 Verification LBA range: start 0x0 length 0x400 00:24:09.603 Nvme6n1 : 0.97 198.52 12.41 66.17 0.00 219985.43 14917.24 221698.93 00:24:09.603 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.603 Job: Nvme7n1 ended in about 0.97 seconds with error 00:24:09.603 Verification LBA range: start 0x0 length 0x400 00:24:09.603 Nvme7n1 : 0.97 207.40 12.96 56.75 0.00 215493.97 25964.74 204721.98 00:24:09.603 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.603 Job: Nvme8n1 ended in about 0.97 seconds with error 00:24:09.603 Verification LBA range: start 0x0 length 0x400 00:24:09.603 Nvme8n1 : 0.97 202.83 12.68 64.87 0.00 209847.95 16602.45 208716.56 00:24:09.603 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.603 Job: Nvme9n1 ended in about 0.96 seconds with error 00:24:09.603 Verification LBA range: start 0x0 length 0x400 00:24:09.603 Nvme9n1 : 0.96 199.78 12.49 66.59 0.00 206937.23 18724.57 228689.43 00:24:09.603 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:24:09.603 Job: Nvme10n1 ended in about 0.93 seconds with error 00:24:09.603 Verification LBA range: start 0x0 length 0x400 00:24:09.603 Nvme10n1 : 0.93 206.77 12.92 68.92 0.00 194994.35 15291.73 237677.23 00:24:09.603 [2024-11-20T07:21:23.631Z] =================================================================================================================== 
00:24:09.603 [2024-11-20T07:21:23.631Z] Total : 2171.16 135.70 614.03 0.00 210705.89 1443.35 237677.23 00:24:09.603 [2024-11-20 08:21:23.378344] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:24:09.603 [2024-11-20 08:21:23.378396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:24:09.603 [2024-11-20 08:21:23.378702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.603 [2024-11-20 08:21:23.378722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe47490 with addr=10.0.0.2, port=4420 00:24:09.603 [2024-11-20 08:21:23.378735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe47490 is same with the state(6) to be set 00:24:09.603 [2024-11-20 08:21:23.378883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.603 [2024-11-20 08:21:23.378897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe57620 with addr=10.0.0.2, port=4420 00:24:09.603 [2024-11-20 08:21:23.378905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57620 is same with the state(6) to be set 00:24:09.603 [2024-11-20 08:21:23.379117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.603 [2024-11-20 08:21:23.379130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e44b0 with addr=10.0.0.2, port=4420 00:24:09.603 [2024-11-20 08:21:23.379138] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e44b0 is same with the state(6) to be set 00:24:09.603 [2024-11-20 08:21:23.379307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.603 [2024-11-20 08:21:23.379320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe154d0 with addr=10.0.0.2, port=4420 00:24:09.603 
[2024-11-20 08:21:23.379329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe154d0 is same with the state(6) to be set 00:24:09.603 [2024-11-20 08:21:23.379490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.603 [2024-11-20 08:21:23.379502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe14df0 with addr=10.0.0.2, port=4420 00:24:09.603 [2024-11-20 08:21:23.379511] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe14df0 is same with the state(6) to be set 00:24:09.603 [2024-11-20 08:21:23.379645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.603 [2024-11-20 08:21:23.379663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0af20 with addr=10.0.0.2, port=4420 00:24:09.604 [2024-11-20 08:21:23.379671] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0af20 is same with the state(6) to be set 00:24:09.604 [2024-11-20 08:21:23.379680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:09.604 [2024-11-20 08:21:23.379687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:09.604 [2024-11-20 08:21:23.379696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:09.604 [2024-11-20 08:21:23.379706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:24:09.604 [2024-11-20 08:21:23.379715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:09.604 [2024-11-20 08:21:23.379722] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:09.604 [2024-11-20 08:21:23.379728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:24:09.604 [2024-11-20 08:21:23.379735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:24:09.604 [2024-11-20 08:21:23.380863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.604 [2024-11-20 08:21:23.380884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x8fe610 with addr=10.0.0.2, port=4420 00:24:09.604 [2024-11-20 08:21:23.380893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8fe610 is same with the state(6) to be set 00:24:09.604 [2024-11-20 08:21:23.381031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.604 [2024-11-20 08:21:23.381044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe57a10 with addr=10.0.0.2, port=4420 00:24:09.604 [2024-11-20 08:21:23.381051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe57a10 is same with the state(6) to be set 00:24:09.604 [2024-11-20 08:21:23.381065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe47490 (9): Bad file descriptor 00:24:09.604 [2024-11-20 08:21:23.381078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe57620 (9): Bad file descriptor 00:24:09.604 [2024-11-20 08:21:23.381089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e44b0 (9): Bad file descriptor 
00:24:09.604 [2024-11-20 08:21:23.381098] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe154d0 (9): Bad file descriptor 00:24:09.604 [2024-11-20 08:21:23.381107] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe14df0 (9): Bad file descriptor 00:24:09.604 [2024-11-20 08:21:23.381117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0af20 (9): Bad file descriptor 00:24:09.604 [2024-11-20 08:21:23.381159] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] Unable to perform failover, already in progress. 00:24:09.604 [2024-11-20 08:21:23.381172] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] Unable to perform failover, already in progress. 00:24:09.604 [2024-11-20 08:21:23.381181] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:24:09.604 [2024-11-20 08:21:23.381190] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] Unable to perform failover, already in progress. 00:24:09.604 [2024-11-20 08:21:23.381225] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] Unable to perform failover, already in progress. 00:24:09.604 [2024-11-20 08:21:23.381236] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] Unable to perform failover, already in progress. 
00:24:09.604 [2024-11-20 08:21:23.381329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x8fe610 (9): Bad file descriptor 00:24:09.604 [2024-11-20 08:21:23.381341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe57a10 (9): Bad file descriptor 00:24:09.604 [2024-11-20 08:21:23.381350] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:24:09.604 [2024-11-20 08:21:23.381356] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:24:09.604 [2024-11-20 08:21:23.381363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:24:09.604 [2024-11-20 08:21:23.381371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:24:09.604 [2024-11-20 08:21:23.381378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:24:09.604 [2024-11-20 08:21:23.381385] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:24:09.604 [2024-11-20 08:21:23.381391] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:24:09.604 [2024-11-20 08:21:23.381399] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:24:09.604 [2024-11-20 08:21:23.381406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:24:09.604 [2024-11-20 08:21:23.381412] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:24:09.604 [2024-11-20 08:21:23.381419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:24:09.604 [2024-11-20 08:21:23.381425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:24:09.604 [2024-11-20 08:21:23.381432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:24:09.604 [2024-11-20 08:21:23.381438] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:24:09.604 [2024-11-20 08:21:23.381445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:24:09.604 [2024-11-20 08:21:23.381451] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:24:09.604 [2024-11-20 08:21:23.381459] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:24:09.604 [2024-11-20 08:21:23.381466] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:24:09.604 [2024-11-20 08:21:23.381472] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:24:09.604 [2024-11-20 08:21:23.381478] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:24:09.604 [2024-11-20 08:21:23.381487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:24:09.604 [2024-11-20 08:21:23.381494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:24:09.604 [2024-11-20 08:21:23.381501] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:24:09.604 [2024-11-20 08:21:23.381507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:24:09.604 [2024-11-20 08:21:23.381565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:24:09.604 [2024-11-20 08:21:23.381577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:24:09.604 [2024-11-20 08:21:23.381599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:24:09.604 [2024-11-20 08:21:23.381606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:24:09.604 [2024-11-20 08:21:23.381613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:24:09.604 [2024-11-20 08:21:23.381619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:24:09.604 [2024-11-20 08:21:23.381626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:24:09.604 [2024-11-20 08:21:23.381633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:24:09.604 [2024-11-20 08:21:23.381639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:24:09.604 [2024-11-20 08:21:23.381645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:24:09.604 [2024-11-20 08:21:23.381801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.604 [2024-11-20 08:21:23.381815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9e9d50 with addr=10.0.0.2, port=4420 00:24:09.604 [2024-11-20 08:21:23.381823] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e9d50 is same with the state(6) to be set 00:24:09.604 [2024-11-20 08:21:23.382067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:09.604 [2024-11-20 08:21:23.382078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9ea1b0 with addr=10.0.0.2, port=4420 00:24:09.604 [2024-11-20 08:21:23.382085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9ea1b0 is same with the state(6) to be set 00:24:09.604 [2024-11-20 08:21:23.382110] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9e9d50 (9): Bad file descriptor 00:24:09.604 [2024-11-20 08:21:23.382121] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x9ea1b0 (9): Bad file descriptor 00:24:09.604 [2024-11-20 08:21:23.382145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:24:09.604 [2024-11-20 08:21:23.382153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:24:09.604 [2024-11-20 08:21:23.382160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:24:09.604 [2024-11-20 08:21:23.382166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:24:09.604 [2024-11-20 08:21:23.382174] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:24:09.605 [2024-11-20 08:21:23.382181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:24:09.605 [2024-11-20 08:21:23.382188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:24:09.605 [2024-11-20 08:21:23.382194] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 00:24:09.864 08:21:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 1759177 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1759177 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 1759177 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:24:10.802 08:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@99 -- # sync 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # set +e 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # for 
i in {1..20} 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:10.802 rmmod nvme_tcp 00:24:10.802 rmmod nvme_fabrics 00:24:10.802 rmmod nvme_keyring 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # set -e 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # return 0 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # '[' -n 1758891 ']' 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@337 -- # killprocess 1758891 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 1758891 ']' 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 1758891 00:24:10.802 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1758891) - No such process 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1758891 is not found' 00:24:10.802 Process with pid 1758891 is not found 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # nvmf_fini 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@254 -- # local dev 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:10.802 08:21:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:10.802 08:21:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@121 -- # return 0 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 
00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # _dev=0 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # dev_map=() 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@274 -- # iptr 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@548 -- # iptables-save 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@548 -- # iptables-restore 00:24:13.342 00:24:13.342 real 0m7.878s 00:24:13.342 user 0m18.935s 00:24:13.342 sys 0m1.431s 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:24:13.342 ************************************ 00:24:13.342 END TEST nvmf_shutdown_tc3 00:24:13.342 ************************************ 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:13.342 ************************************ 00:24:13.342 START TEST nvmf_shutdown_tc4 00:24:13.342 ************************************ 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:24:13.342 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@258 -- # local -g is_hw=no 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@260 -- # remove_target_ns 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # xtrace_disable 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # pci_devs=() 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@133 -- # local -A pci_drivers 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # net_devs=() 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # e810=() 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # local -ga e810 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # x722=() 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # local -ga x722 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # mlx=() 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # local -ga mlx 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:13.343 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:13.343 08:21:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:13.343 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ tcp == 
rdma ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:13.343 Found net devices under 0000:86:00.0: cvl_0_0 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@234 -- # [[ 
up == up ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:13.343 Found net devices under 0000:86:00.1: cvl_0_1 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # is_hw=yes 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:13.343 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@247 -- # create_target_ns 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@28 -- # local -g _dev 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # ips=() 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 
00:24:13.344 08:21:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772161 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:13.344 10.0.0.1 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:13.344 08:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772162 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:13.344 10.0.0.2 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 
-- # local dev=cvl_0_0 in_ns= 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:24:13.344 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:13.345 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of 
data. 00:24:13.345 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:24:13.345 00:24:13.345 --- 10.0.0.1 ping statistics --- 00:24:13.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.345 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=target0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:13.345 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:13.345 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:24:13.345 00:24:13.345 --- 10.0.0.2 ping statistics --- 00:24:13.345 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:13.345 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@270 -- # return 0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:13.345 08:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:13.345 08:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # return 1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev= 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@160 -- # return 0 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:13.345 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:13.346 08:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=target0 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@170 -- # get_ip_address target1 
NVMF_TARGET_NS_CMD 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # local dev=target1 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # return 1 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@159 -- # dev= 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@160 -- # return 0 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:24:13.346 ' 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:13.346 08:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:13.346 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # nvmfpid=1760383 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@329 -- # waitforlisten 1760383 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 1760383 ']' 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # 
local max_retries=100 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:13.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:13.605 08:21:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:13.605 [2024-11-20 08:21:27.446840] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:24:13.605 [2024-11-20 08:21:27.446886] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:13.605 [2024-11-20 08:21:27.522984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:13.605 [2024-11-20 08:21:27.565045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.605 [2024-11-20 08:21:27.565081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:13.605 [2024-11-20 08:21:27.565088] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.605 [2024-11-20 08:21:27.565094] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.605 [2024-11-20 08:21:27.565099] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:13.605 [2024-11-20 08:21:27.566576] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:13.605 [2024-11-20 08:21:27.566688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:13.605 [2024-11-20 08:21:27.566797] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:13.605 [2024-11-20 08:21:27.566798] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:24:14.542 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:14.543 [2024-11-20 08:21:28.311898] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.543 08:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:14.543 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:14.543 Malloc1 00:24:14.543 [2024-11-20 08:21:28.431466] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:14.543 Malloc2 00:24:14.543 Malloc3 00:24:14.543 Malloc4 00:24:14.802 Malloc5 00:24:14.802 Malloc6 00:24:14.802 Malloc7 00:24:14.802 Malloc8 00:24:14.802 Malloc9 
00:24:14.802 Malloc10 00:24:14.802 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:14.802 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:24:14.802 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:14.802 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:15.061 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=1760690 00:24:15.061 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:24:15.061 08:21:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:24:15.061 [2024-11-20 08:21:28.940857] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:24:20.348 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:20.348 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 1760383 00:24:20.348 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1760383 ']' 00:24:20.348 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1760383 00:24:20.348 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:24:20.348 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.348 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1760383 00:24:20.348 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:20.348 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:20.348 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1760383' 00:24:20.348 killing process with pid 1760383 00:24:20.348 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 1760383 00:24:20.348 08:21:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 1760383 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 
00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 [2024-11-20 08:21:33.933689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68ef0 is same with the state(6) to be set 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 [2024-11-20 08:21:33.933735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68ef0 is same with the state(6) to be set 00:24:20.348 [2024-11-20 08:21:33.933743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b68ef0 is same with the state(6) to be set 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 [2024-11-20 08:21:33.933863] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 [2024-11-20 08:21:33.934261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b693c0 is same with the state(6) to be set 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write 
completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 starting I/O failed: -6 00:24:20.348 Write completed with error (sct=0, sc=8) 00:24:20.348 [2024-11-20 08:21:33.934842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.348 NVMe io qpair process completion error 00:24:20.349 [2024-11-20 08:21:33.943586] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfdca0 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.943617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfdca0 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.943625] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfdca0 is 
same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.943632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfdca0 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.944044] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe170 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.944070] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe170 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.944077] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe170 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.944384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe640 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.944410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe640 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.944418] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe640 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.944426] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe640 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.944432] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe640 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.944439] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe640 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.944446] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe640 is same with the state(6) to be set 00:24:20.349 [2024-11-20 08:21:33.944452] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe640 is same with the state(6) to be 
set 00:24:20.349 [2024-11-20 08:21:33.944458] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bfe640 is same with the state(6) to be set
00:24:20.349 [previous message repeated multiple times for tqpair=0x1bfe640, 0x1bfd7d0, 0x1bfefe0, 0x1bff4b0, 0x1bff980, 0x1bfeb10, 0x1c00320, 0x1c007f0, 0x1c00cc0, 0x1bffe50]
00:24:20.349 Write completed with error (sct=0, sc=8)
00:24:20.349 starting I/O failed: -6
00:24:20.349 [the two messages above repeat, interleaved, throughout this interval]
00:24:20.349 [2024-11-20 08:21:33.947988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:20.350 [2024-11-20 08:21:33.948893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:20.350 [2024-11-20 08:21:33.949883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:20.351 [2024-11-20 08:21:33.951634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:20.351 NVMe io qpair process completion error
00:24:20.351 [2024-11-20 08:21:33.952655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:20.352 [2024-11-20 08:21:33.953566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:20.352 [2024-11-20 08:21:33.954541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:20.352 Write completed with error (sct=0, sc=8) 00:24:20.352 starting
I/O failed: -6 00:24:20.352 Write completed with error (sct=0, sc=8) 00:24:20.352 starting I/O failed: -6 00:24:20.352 Write completed with error (sct=0, sc=8) 00:24:20.352 starting I/O failed: -6 00:24:20.352 Write completed with error (sct=0, sc=8) 00:24:20.352 starting I/O failed: -6 00:24:20.352 Write completed with error (sct=0, sc=8) 00:24:20.352 starting I/O failed: -6 00:24:20.352 Write completed with error (sct=0, sc=8) 00:24:20.352 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 [2024-11-20 08:21:33.956145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:20.353 NVMe io qpair process completion error 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write 
completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 [2024-11-20 08:21:33.957055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O 
failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, 
sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 [2024-11-20 08:21:33.957934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 
00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.353 starting I/O failed: -6 00:24:20.353 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with 
error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 [2024-11-20 08:21:33.958956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O 
failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting 
I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 
starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 [2024-11-20 08:21:33.961045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.354 NVMe io qpair process completion error 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with 
error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 [2024-11-20 08:21:33.962053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 Write completed with error (sct=0, sc=8) 00:24:20.354 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting 
I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error 
(sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 [2024-11-20 08:21:33.962972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error 
(sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting 
I/O failed: -6 00:24:20.355 Write completed with error (sct=0, sc=8) 00:24:20.355 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided ...]
00:24:20.355 [2024-11-20 08:21:33.963961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:20.356 [2024-11-20 08:21:33.967898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:20.356 NVMe io qpair process completion error
00:24:20.356 [2024-11-20 08:21:33.973147] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:20.357 [2024-11-20 08:21:33.974084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:20.357 [2024-11-20 08:21:33.975109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:20.358 [2024-11-20 08:21:33.976757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:24:20.358 NVMe io qpair process completion error
00:24:20.358 [2024-11-20 08:21:33.977744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:24:20.358 [2024-11-20 08:21:33.978618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:24:20.359 [2024-11-20 08:21:33.979651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed:
-6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 [2024-11-20 08:21:33.981733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.359 NVMe io qpair process completion error 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with 
error (sct=0, sc=8) 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 Write completed with error (sct=0, sc=8) 00:24:20.359 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write 
completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 
00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 
00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with 
error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 [2024-11-20 08:21:33.984134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, 
sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.360 Write completed with error (sct=0, sc=8) 00:24:20.360 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error 
(sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with 
error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 [2024-11-20 08:21:33.988333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:20.361 NVMe io qpair process completion error 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error 
(sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error 
(sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 
00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.361 Write completed with error (sct=0, sc=8) 00:24:20.361 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 Write completed with 
error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 [2024-11-20 08:21:33.990640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed 
with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write 
completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 
Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.362 [2024-11-20 08:21:33.994136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:20.362 NVMe io qpair process completion error 00:24:20.362 Write completed with error (sct=0, sc=8) 00:24:20.362 starting I/O failed: -6 00:24:20.363 Write completed with error (sct=0, sc=8) 00:24:20.363 [2024-11-20 08:21:33.995200] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*:
[nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:24:20.363 starting I/O failed: -6 00:24:20.363 Write completed with error (sct=0, sc=8) 00:24:20.363 starting I/O failed: -6 00:24:20.363 [2024-11-20 08:21:33.995992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:24:20.363 Write completed with error (sct=0, sc=8) 00:24:20.363
starting I/O failed: -6 00:24:20.363 Write completed with error (sct=0, sc=8) 00:24:20.364 starting I/O failed: -6 00:24:20.364 [2024-11-20 08:21:33.997041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:24:20.364 Write completed with error (sct=0, sc=8) 00:24:20.364 starting I/O failed: -6
00:24:20.364 Write completed with error (sct=0, sc=8) 00:24:20.364 starting I/O failed: -6 00:24:20.365 Write completed with error (sct=0, sc=8) 00:24:20.365 starting I/O
failed: -6 00:24:20.365 Write completed with error (sct=0, sc=8) 00:24:20.365 starting I/O failed: -6 00:24:20.365 [2024-11-20 08:21:33.999468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:24:20.365 NVMe io qpair process completion error 00:24:20.365 Initializing NVMe Controllers 00:24:20.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:24:20.365 Controller IO queue size 128, less than required. 00:24:20.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:24:20.365 Controller IO queue size 128, less than required. 00:24:20.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:24:20.365 Controller IO queue size 128, less than required. 00:24:20.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:20.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:24:20.365 Controller IO queue size 128, less than required. 00:24:20.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:24:20.365 Controller IO queue size 128, less than required. 00:24:20.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:24:20.365 Controller IO queue size 128, less than required. 00:24:20.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:24:20.365 Controller IO queue size 128, less than required. 00:24:20.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:24:20.365 Controller IO queue size 128, less than required. 00:24:20.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:24:20.365 Controller IO queue size 128, less than required. 00:24:20.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:24:20.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:24:20.365 Controller IO queue size 128, less than required. 00:24:20.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:24:20.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:24:20.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:24:20.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:24:20.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:24:20.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:24:20.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:24:20.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:24:20.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:24:20.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:24:20.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:24:20.365 Initialization complete. Launching workers. 
00:24:20.365 ========================================================
00:24:20.365 Latency(us)
00:24:20.365 Device Information                                     :       IOPS      MiB/s    Average        min        max
00:24:20.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:    2180.60      93.70   58704.07     852.54  113794.69
00:24:20.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:    2232.55      95.93   57348.81     698.26  113177.04
00:24:20.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:    2235.54      96.06   57288.74     725.39  110570.53
00:24:20.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:    2162.86      92.94   59139.31     509.92  108990.19
00:24:20.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:    2189.79      94.09   58533.26     658.12  109052.19
00:24:20.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:    2173.76      93.40   58975.62     821.22  107874.16
00:24:20.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:    2213.10      95.09   57939.51     629.39  112525.69
00:24:20.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:    2246.66      96.54   57109.89     653.95  105356.74
00:24:20.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:    2294.98      98.61   55939.73     793.76  119282.53
00:24:20.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0:   2182.53      93.78   58184.68     714.27  105180.36
00:24:20.365 ========================================================
00:24:20.365 Total                                                  :   22112.37     950.14   57899.84     509.92  119282.53
00:24:20.365
00:24:20.365 [2024-11-20 08:21:34.002401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b1ae0 is same with the state(6) to be set
00:24:20.365 [2024-11-20 08:21:34.002447] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0740 is same with the state(6) to be set
00:24:20.365 [2024-11-20 08:21:34.002476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state:
*ERROR*: The recv state of tqpair=0x14af560 is same with the state(6) to be set 00:24:20.366 [2024-11-20 08:21:34.002505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b1720 is same with the state(6) to be set 00:24:20.366 [2024-11-20 08:21:34.002534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14afef0 is same with the state(6) to be set 00:24:20.366 [2024-11-20 08:21:34.002563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0a70 is same with the state(6) to be set 00:24:20.366 [2024-11-20 08:21:34.002591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14af890 is same with the state(6) to be set 00:24:20.366 [2024-11-20 08:21:34.002619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14afbc0 is same with the state(6) to be set 00:24:20.366 [2024-11-20 08:21:34.002647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b0410 is same with the state(6) to be set 00:24:20.366 [2024-11-20 08:21:34.002676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14b1900 is same with the state(6) to be set 00:24:20.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:24:20.366 08:21:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 1760690 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1760690 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 1760690 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@335 -- # nvmfcleanup 00:24:21.305 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@99 -- # sync 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@102 -- # set +e 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:21.617 rmmod nvme_tcp 00:24:21.617 rmmod nvme_fabrics 00:24:21.617 rmmod nvme_keyring 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # set -e 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # return 0 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # '[' -n 1760383 ']' 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@337 -- # killprocess 1760383 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 1760383 ']' 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 1760383 00:24:21.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1760383) - No such process 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 1760383 is not found' 00:24:21.617 Process with pid 1760383 is not found 
00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # nvmf_fini 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@254 -- # local dev 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:21.617 08:21:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@121 -- # return 0 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:24:23.556 08:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # _dev=0 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # dev_map=() 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@274 -- # iptr 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@548 -- 
# iptables-save 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@548 -- # iptables-restore 00:24:23.556 00:24:23.556 real 0m10.524s 00:24:23.556 user 0m27.606s 00:24:23.556 sys 0m5.263s 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:24:23.556 ************************************ 00:24:23.556 END TEST nvmf_shutdown_tc4 00:24:23.556 ************************************ 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:24:23.556 00:24:23.556 real 0m42.666s 00:24:23.556 user 1m45.898s 00:24:23.556 sys 0m14.222s 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.556 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:24:23.556 ************************************ 00:24:23.556 END TEST nvmf_shutdown 00:24:23.556 ************************************ 00:24:23.557 08:21:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:23.557 08:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:23.557 08:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:23.557 08:21:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:23.817 ************************************ 00:24:23.817 START TEST nvmf_nsid 00:24:23.817 ************************************ 00:24:23.817 08:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:24:23.817 * Looking for test storage... 00:24:23.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" 
in 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:23.817 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:23.817 --rc genhtml_branch_coverage=1 00:24:23.817 --rc genhtml_function_coverage=1 00:24:23.817 --rc genhtml_legend=1 00:24:23.817 --rc geninfo_all_blocks=1 00:24:23.817 --rc geninfo_unexecuted_blocks=1 00:24:23.817 00:24:23.817 ' 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:23.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.817 --rc genhtml_branch_coverage=1 00:24:23.817 --rc genhtml_function_coverage=1 00:24:23.817 --rc genhtml_legend=1 00:24:23.817 --rc geninfo_all_blocks=1 00:24:23.817 --rc geninfo_unexecuted_blocks=1 00:24:23.817 00:24:23.817 ' 00:24:23.817 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:23.817 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.817 --rc genhtml_branch_coverage=1 00:24:23.817 --rc genhtml_function_coverage=1 00:24:23.817 --rc genhtml_legend=1 00:24:23.817 --rc geninfo_all_blocks=1 00:24:23.817 --rc geninfo_unexecuted_blocks=1 00:24:23.817 00:24:23.818 ' 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:23.818 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.818 --rc genhtml_branch_coverage=1 00:24:23.818 --rc genhtml_function_coverage=1 00:24:23.818 --rc genhtml_legend=1 00:24:23.818 --rc geninfo_all_blocks=1 00:24:23.818 --rc geninfo_unexecuted_blocks=1 00:24:23.818 00:24:23.818 ' 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.818 
08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.818 08:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:24:23.818 08:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:23.818 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.818 08:21:37 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # xtrace_disable 00:24:23.818 08:21:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # pci_devs=() 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@135 -- # net_devs=() 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/common.sh@135 -- # local -ga net_devs 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # e810=() 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # local -ga e810 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # x722=() 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # local -ga x722 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # mlx=() 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # local -ga mlx 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:30.394 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:30.394 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:30.394 08:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:30.394 Found net devices under 0000:86:00.0: cvl_0_0 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.394 08:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:30.394 Found net devices under 0000:86:00.1: cvl_0_1 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # is_hw=yes 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@247 -- # create_target_ns 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:24:30.394 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:30.395 08:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:30.395 08:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:30.395 10.0.0.1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:30.395 08:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:30.395 10.0.0.2 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:30.395 08:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:30.395 
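The `set_ip`/`set_up` helpers above run the same `ip` commands either on the host or inside the `nvmf_ns_spdk` namespace, depending on whether `NVMF_TARGET_NS_CMD` is set. A hedged sketch of that pattern — here the commands are echoed (dry run) rather than executed, since the real helpers need root and a live namespace, and `ns` is a plain namespace name rather than the nameref the original uses:

```shell
# Build the command line, prefixing "ip netns exec <ns>" when a
# namespace is given; echo it instead of running it (dry run).
run_in_ns() {
    local ns=$1; shift
    if [ -n "$ns" ]; then
        echo "ip netns exec $ns $*"
    else
        echo "$*"
    fi
}

run_in_ns ""            ip addr add 10.0.0.1/24 dev cvl_0_0
run_in_ns nvmf_ns_spdk  ip addr add 10.0.0.2/24 dev cvl_0_1
run_in_ns nvmf_ns_spdk  ip link set cvl_0_1 up
```

This mirrors the log: `cvl_0_0` (initiator side) is configured on the host, while `cvl_0_1` (target side) gets the same treatment under `ip netns exec nvmf_ns_spdk`.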
08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 
00:24:30.395 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:30.395 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.403 ms 00:24:30.395 00:24:30.395 --- 10.0.0.1 ping statistics --- 00:24:30.395 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.395 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:30.395 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:30.395 08:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:30.396 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:30.396 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.142 ms 00:24:30.396 00:24:30.396 --- 10.0.0.2 ping statistics --- 00:24:30.396 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:30.396 rtt min/avg/max/mdev = 0.142/0.142/0.142/0.000 ms 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@270 -- # return 0 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # 
get_initiator_ip_address 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:30.396 08:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # return 1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev= 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@160 -- # return 0 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target0 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=target1 
00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # return 1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev= 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@160 -- # return 0 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:24:30.396 ' 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # 
nvmfpid=1765224 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 1765224 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1765224 ']' 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:30.396 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.396 08:21:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:30.396 [2024-11-20 08:21:43.916387] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:24:30.396 [2024-11-20 08:21:43.916429] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:30.396 [2024-11-20 08:21:43.994818] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.396 [2024-11-20 08:21:44.035250] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:30.396 [2024-11-20 08:21:44.035285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:30.396 [2024-11-20 08:21:44.035292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:30.396 [2024-11-20 08:21:44.035297] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:30.396 [2024-11-20 08:21:44.035303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:30.396 [2024-11-20 08:21:44.035845] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:30.396 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.396 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:30.396 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=1765245 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=c865b0b9-e8c4-420e-b1e4-6262a9f420e4 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=40fbecb6-f4cb-48af-82df-bf2a993f468f 00:24:30.397 08:21:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=0a6a4e5e-4881-488d-93e6-0f3ed04d233d 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:30.397 null0 00:24:30.397 [2024-11-20 08:21:44.217879] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:24:30.397 [2024-11-20 08:21:44.217921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1765245 ] 00:24:30.397 null1 00:24:30.397 null2 00:24:30.397 [2024-11-20 08:21:44.234611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.397 [2024-11-20 08:21:44.258802] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:30.397 [2024-11-20 08:21:44.290535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 1765245 /var/tmp/tgt2.sock 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 1765245 ']' 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:24:30.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:30.397 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:30.397 [2024-11-20 08:21:44.331309] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:30.656 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:30.656 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:24:30.656 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:24:30.915 [2024-11-20 08:21:44.853143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:30.915 [2024-11-20 08:21:44.869254] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:24:30.915 nvme0n1 nvme0n2 00:24:30.915 nvme1n1 00:24:30.915 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:24:30.915 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:24:30.915 08:21:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:32.295 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:24:32.295 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:24:32.295 08:21:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:24:32.295 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:24:32.295 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:24:32.295 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:24:32.295 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:24:32.295 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:32.295 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:32.295 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:32.295 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:24:32.295 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:24:32.295 08:21:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:24:33.229 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:33.229 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:24:33.229 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:33.229 08:21:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid c865b0b9-e8c4-420e-b1e4-6262a9f420e4 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr 
-d - 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c865b0b9e8c4420eb1e46262a9f420e4 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C865B0B9E8C4420EB1E46262A9F420E4 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ C865B0B9E8C4420EB1E46262A9F420E4 == \C\8\6\5\B\0\B\9\E\8\C\4\4\2\0\E\B\1\E\4\6\2\6\2\A\9\F\4\2\0\E\4 ]] 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 40fbecb6-f4cb-48af-82df-bf2a993f468f 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 
2 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=40fbecb6f4cb48af82dfbf2a993f468f 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 40FBECB6F4CB48AF82DFBF2A993F468F 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 40FBECB6F4CB48AF82DFBF2A993F468F == \4\0\F\B\E\C\B\6\F\4\C\B\4\8\A\F\8\2\D\F\B\F\2\A\9\9\3\F\4\6\8\F ]] 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 0a6a4e5e-4881-488d-93e6-0f3ed04d233d 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@544 -- # tr -d - 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 
nsid=3 nguid 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=0a6a4e5e4881488d93e60f3ed04d233d 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 0A6A4E5E4881488D93E60F3ED04D233D 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 0A6A4E5E4881488D93E60F3ED04D233D == \0\A\6\A\4\E\5\E\4\8\8\1\4\8\8\D\9\3\E\6\0\F\3\E\D\0\4\D\2\3\3\D ]] 00:24:33.229 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 1765245 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1765245 ']' 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1765245 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1765245 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
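The three NGUID comparisons above hinge on the `uuid2nguid` conversion (nvmf/common.sh@544): judging from the `tr -d -` call and the uppercase values echoed at nsid.sh@43, an NGUID is the UUID with the dashes stripped and the hex digits uppercased. A hedged sketch, reconstructed from the log rather than from nvmf/common.sh itself:

```shell
# Convert a UUID to the NGUID form compared against "nvme id-ns" output:
# drop dashes, uppercase the hex digits.
uuid2nguid() {
    echo "$1" | tr -d - | tr '[:lower:]' '[:upper:]'
}

uuid2nguid c865b0b9-e8c4-420e-b1e4-6262a9f420e4
# -> C865B0B9E8C4420EB1E46262A9F420E4
```

The test then only has to compare this against the (uppercased) `.nguid` field that `nvme id-ns /dev/nvme0nN -o json | jq -r .nguid` reports for each namespace.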
common/autotest_common.sh@972 -- # echo 'killing process with pid 1765245' 00:24:33.488 killing process with pid 1765245 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1765245 00:24:33.488 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1765245 00:24:33.746 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:24:33.746 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:33.746 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:24:33.746 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:33.746 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 00:24:33.746 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:33.746 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:33.746 rmmod nvme_tcp 00:24:34.004 rmmod nvme_fabrics 00:24:34.004 rmmod nvme_keyring 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # set -e 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 1765224 ']' 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 1765224 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 1765224 ']' 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 1765224 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:24:34.004 08:21:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1765224 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1765224' 00:24:34.004 killing process with pid 1765224 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 1765224 00:24:34.004 08:21:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 1765224 00:24:34.004 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:34.004 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:24:34.004 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@254 -- # local dev 00:24:34.004 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:34.004 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:34.004 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:34.004 08:21:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@121 -- # return 0 00:24:36.545 08:21:50 
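The `killprocess` sequence traced above (pid liveness check with `kill -0`, a `uname`/`ps -o comm=` guard so a `sudo` wrapper is never killed, then `kill` followed by `wait`) can be sketched as follows. This is a hedged reconstruction from the traced commands, not the exact body of `autotest_common.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the killprocess helper traced above: verify the pid is alive,
# refuse to signal a sudo wrapper process, then kill it and reap it.
killprocess() {
  local pid=$1
  [ -n "$pid" ] || return 1
  kill -0 "$pid" 2>/dev/null || return 0          # already gone, nothing to do
  local name
  name=$(ps --no-headers -o comm= "$pid")         # process name, as in the trace
  [ "$name" = sudo ] && return 1                  # never kill the sudo wrapper
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid" 2>/dev/null || true                 # reap so no zombie is left
}
```

In the log, the helper is called twice: once for the connected initiator reactor (`reactor_1`, pid 1765245) and once for the nvmf target itself (`reactor_0`, pid 1765224) during `nvmftestfini`.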
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@41 -- # dev_map=() 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@274 -- # iptr 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-save 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@548 -- # iptables-restore 00:24:36.545 00:24:36.545 real 0m12.513s 00:24:36.545 user 0m9.783s 00:24:36.545 sys 0m5.507s 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:24:36.545 ************************************ 00:24:36.545 END TEST nvmf_nsid 00:24:36.545 ************************************ 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:24:36.545 00:24:36.545 real 12m9.420s 00:24:36.545 user 26m10.750s 00:24:36.545 sys 3m43.847s 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:36.545 08:21:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:36.545 ************************************ 00:24:36.545 END TEST nvmf_target_extra 00:24:36.545 ************************************ 00:24:36.545 08:21:50 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:36.545 08:21:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:36.545 08:21:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:36.545 08:21:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:36.545 ************************************ 00:24:36.545 START TEST nvmf_host 00:24:36.545 ************************************ 00:24:36.545 08:21:50 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:24:36.545 * Looking for test storage... 00:24:36.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:36.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.545 --rc genhtml_branch_coverage=1 00:24:36.545 --rc genhtml_function_coverage=1 00:24:36.545 --rc genhtml_legend=1 00:24:36.545 --rc geninfo_all_blocks=1 00:24:36.545 --rc geninfo_unexecuted_blocks=1 00:24:36.545 00:24:36.545 ' 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:36.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.545 --rc genhtml_branch_coverage=1 00:24:36.545 --rc genhtml_function_coverage=1 00:24:36.545 --rc genhtml_legend=1 00:24:36.545 --rc 
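The `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.-:` into an array and walks the components left to right, comparing them numerically. A self-contained sketch of that comparison logic (assumed from the traced steps in `scripts/common.sh`; the real helper also handles `>`, `>=`, and `==` operators):

```shell
#!/usr/bin/env bash
# Sketch of the version comparison traced above: split both versions on
# ".-:" and compare components numerically, padding the shorter with zeros.
version_lt() {
  local -a ver1 ver2
  IFS=.-: read -ra ver1 <<< "$1"
  IFS=.-: read -ra ver2 <<< "$2"
  local v len=${#ver1[@]}
  (( ${#ver2[@]} > len )) && len=${#ver2[@]}
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"   # the lcov check the trace performs
```

Here the check gates which `--rc lcov_*` options are passed: lcov older than 2 takes the `lcov_branch_coverage=1`/`lcov_function_coverage=1` spelling exported into `LCOV_OPTS` just below.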
geninfo_all_blocks=1 00:24:36.545 --rc geninfo_unexecuted_blocks=1 00:24:36.545 00:24:36.545 ' 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:36.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.545 --rc genhtml_branch_coverage=1 00:24:36.545 --rc genhtml_function_coverage=1 00:24:36.545 --rc genhtml_legend=1 00:24:36.545 --rc geninfo_all_blocks=1 00:24:36.545 --rc geninfo_unexecuted_blocks=1 00:24:36.545 00:24:36.545 ' 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:36.545 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.545 --rc genhtml_branch_coverage=1 00:24:36.545 --rc genhtml_function_coverage=1 00:24:36.545 --rc genhtml_legend=1 00:24:36.545 --rc geninfo_all_blocks=1 00:24:36.545 --rc geninfo_unexecuted_blocks=1 00:24:36.545 00:24:36.545 ' 00:24:36.545 08:21:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:36.546 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:36.546 ************************************ 00:24:36.546 START TEST nvmf_multicontroller 00:24:36.546 ************************************ 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:24:36.546 * Looking for test storage... 00:24:36.546 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:24:36.546 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 
-- # case "$op" in 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:24:36.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.806 --rc genhtml_branch_coverage=1 00:24:36.806 --rc genhtml_function_coverage=1 00:24:36.806 --rc genhtml_legend=1 00:24:36.806 --rc geninfo_all_blocks=1 00:24:36.806 --rc geninfo_unexecuted_blocks=1 00:24:36.806 00:24:36.806 ' 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:36.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.806 --rc genhtml_branch_coverage=1 00:24:36.806 --rc genhtml_function_coverage=1 00:24:36.806 --rc genhtml_legend=1 00:24:36.806 --rc geninfo_all_blocks=1 00:24:36.806 --rc geninfo_unexecuted_blocks=1 00:24:36.806 00:24:36.806 ' 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:36.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.806 --rc genhtml_branch_coverage=1 00:24:36.806 --rc genhtml_function_coverage=1 00:24:36.806 --rc genhtml_legend=1 00:24:36.806 --rc geninfo_all_blocks=1 00:24:36.806 --rc geninfo_unexecuted_blocks=1 00:24:36.806 00:24:36.806 ' 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:36.806 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:36.806 --rc genhtml_branch_coverage=1 00:24:36.806 --rc genhtml_function_coverage=1 00:24:36.806 --rc genhtml_legend=1 00:24:36.806 --rc geninfo_all_blocks=1 00:24:36.806 --rc geninfo_unexecuted_blocks=1 00:24:36.806 00:24:36.806 ' 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:36.806 08:21:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:36.806 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:36.807 08:21:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@50 -- # : 0 
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
00:24:36.807 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@54 -- # have_pci_nics=0
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # '[' -z tcp ']'
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # prepare_net_devs
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # local -g is_hw=no
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # remove_target_ns
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # [[ phy != virt ]]
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # xtrace_disable
00:24:36.807 08:21:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # pci_devs=()
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # local -a pci_devs
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # pci_net_devs=()
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # local -a pci_net_devs
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # pci_drivers=()
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # local -A pci_drivers
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # net_devs=()
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # local -ga net_devs
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # e810=()
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # local -ga e810
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # x722=()
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # local -ga x722
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # mlx=()
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # local -ga mlx
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:24:43.378 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}")
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # [[ tcp == rdma ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # [[ e810 == e810 ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}")
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # (( 2 == 0 ))
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
00:24:43.379 Found 0000:86:00.0 (0x8086 - 0x159b)
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
00:24:43.379 Found 0000:86:00.1 (0x8086 - 0x159b)
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # (( 0 > 0 ))
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ e810 == e810 ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ tcp == rdma ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
00:24:43.379 Found net devices under 0000:86:00.0: cvl_0_0
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
00:24:43.379 Found net devices under 0000:86:00.1: cvl_0_1
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # (( 2 == 0 ))
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # [[ tcp == rdma ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # is_hw=yes
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # [[ yes == yes ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # [[ tcp == tcp ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # nvmf_tcp_init
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@247 -- # create_target_ns
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@27 -- # local -gA dev_map
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@28 -- # local -g _dev
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 ))
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev ))
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # ips=()
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns=
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip)))
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@52 -- # [[ phy == phy ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@55 -- # initiator=cvl_0_0
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@55 -- # target=cvl_0_1
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@58 -- # [[ phy == veth ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@59 -- # [[ phy == veth ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns=
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n '' ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772161
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772161
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.1
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0'
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias'
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.1
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias
00:24:43.379 10.0.0.1
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # val_to_ip 167772162
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772162
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:24:43.379 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@197 -- # ip=10.0.0.2
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # echo 10.0.0.2
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:24:43.380 10.0.0.2
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@66 -- # set_up cvl_0_0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns=
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up'
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@69 -- # [[ phy == veth ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # [[ phy == veth ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@38 -- # ping_ips 1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@87 -- # local pairs=1 pair
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair = 0 ))
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:24:43.380 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:24:43.380 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms
00:24:43.380 
00:24:43.380 --- 10.0.0.1 ping statistics ---
00:24:43.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:43.380 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ -n '' ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2'
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2
00:24:43.380 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:24:43.380 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms
00:24:43.380 
00:24:43.380 --- 10.0.0.2 ping statistics ---
00:24:43.380 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:24:43.380 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair++ ))
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # (( pair < pairs ))
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # return 0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2=
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@324 -- # get_initiator_ip_address
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@174 -- # get_ip_address initiator1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev initiator1
00:24:43.380 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=initiator1
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]]
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # return 1
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@160 -- # return 0
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP=
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target0
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target0
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target0 ]]
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]]
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@101 -- # echo cvl_0_1
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=cvl_0_1
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@163 -- # ip=10.0.0.2
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]]
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # echo 10.0.0.2
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # get_net_dev target1
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # local dev=target1
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n target1 ]]
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # [[ -n '' ]]
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # return 1
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@159 -- # dev=
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@160 -- # return 0
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP=
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]]
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2
00:24:43.381 '
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # nvmfpid=1769587
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # waitforlisten 1769587
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1769587 ']'
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable
00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
[2024-11-20 08:21:56.792720] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization...
[2024-11-20 08:21:56.792765] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
[2024-11-20 08:21:56.865130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
[2024-11-20 08:21:56.906673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
[2024-11-20 08:21:56.906708] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
[2024-11-20 08:21:56.906716] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
[2024-11-20 08:21:56.906722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
[2024-11-20 08:21:56.906727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:43.381 [2024-11-20 08:21:56.908085] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:43.381 [2024-11-20 08:21:56.908194] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:43.381 [2024-11-20 08:21:56.908196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:43.381 08:21:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.381 [2024-11-20 08:21:57.042842] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:24:43.381 Malloc0 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.381 [2024-11-20 08:21:57.104163] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.381 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:43.381 
08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.382 [2024-11-20 08:21:57.112086] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.382 Malloc1 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.382 08:21:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=1769611 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 1769611 /var/tmp/bdevperf.sock 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 1769611 ']' 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:43.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:43.382 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.641 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:43.641 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:24:43.641 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:43.641 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.641 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.641 NVMe0n1 00:24:43.641 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.641 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:24:43.641 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:24:43.641 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.641 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.901 08:21:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.901 1 00:24:43.901 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:43.901 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:43.901 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:43.901 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:43.901 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.901 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:43.901 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.901 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:24:43.901 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.901 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.901 request: 00:24:43.901 { 00:24:43.901 "name": "NVMe0", 00:24:43.901 "trtype": "tcp", 00:24:43.901 "traddr": "10.0.0.2", 00:24:43.901 "adrfam": "ipv4", 00:24:43.901 "trsvcid": "4420", 00:24:43.901 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:24:43.901 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:24:43.901 "hostaddr": "10.0.0.1", 00:24:43.901 "prchk_reftag": false, 00:24:43.901 "prchk_guard": false, 00:24:43.901 "hdgst": false, 00:24:43.901 "ddgst": false, 00:24:43.901 "allow_unrecognized_csi": false, 00:24:43.901 "method": "bdev_nvme_attach_controller", 00:24:43.901 "req_id": 1 00:24:43.901 } 00:24:43.901 Got JSON-RPC error response 00:24:43.901 response: 00:24:43.901 { 00:24:43.901 "code": -114, 00:24:43.901 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:43.901 } 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.902 request: 00:24:43.902 { 00:24:43.902 "name": "NVMe0", 00:24:43.902 "trtype": "tcp", 00:24:43.902 "traddr": "10.0.0.2", 00:24:43.902 "adrfam": "ipv4", 00:24:43.902 "trsvcid": "4420", 00:24:43.902 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:24:43.902 "hostaddr": "10.0.0.1", 00:24:43.902 "prchk_reftag": false, 00:24:43.902 "prchk_guard": false, 00:24:43.902 "hdgst": false, 00:24:43.902 "ddgst": false, 00:24:43.902 "allow_unrecognized_csi": false, 00:24:43.902 "method": "bdev_nvme_attach_controller", 00:24:43.902 "req_id": 1 00:24:43.902 } 00:24:43.902 Got JSON-RPC error response 00:24:43.902 response: 00:24:43.902 { 00:24:43.902 "code": -114, 00:24:43.902 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:43.902 } 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.902 request: 00:24:43.902 { 00:24:43.902 "name": "NVMe0", 00:24:43.902 "trtype": "tcp", 00:24:43.902 "traddr": "10.0.0.2", 00:24:43.902 "adrfam": "ipv4", 00:24:43.902 "trsvcid": "4420", 00:24:43.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.902 
"hostaddr": "10.0.0.1", 00:24:43.902 "prchk_reftag": false, 00:24:43.902 "prchk_guard": false, 00:24:43.902 "hdgst": false, 00:24:43.902 "ddgst": false, 00:24:43.902 "multipath": "disable", 00:24:43.902 "allow_unrecognized_csi": false, 00:24:43.902 "method": "bdev_nvme_attach_controller", 00:24:43.902 "req_id": 1 00:24:43.902 } 00:24:43.902 Got JSON-RPC error response 00:24:43.902 response: 00:24:43.902 { 00:24:43.902 "code": -114, 00:24:43.902 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:24:43.902 } 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:43.902 request: 00:24:43.902 { 00:24:43.902 "name": "NVMe0", 00:24:43.902 "trtype": "tcp", 00:24:43.902 "traddr": "10.0.0.2", 00:24:43.902 "adrfam": "ipv4", 00:24:43.902 "trsvcid": "4420", 00:24:43.902 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:43.902 "hostaddr": "10.0.0.1", 00:24:43.902 "prchk_reftag": false, 00:24:43.902 "prchk_guard": false, 00:24:43.902 "hdgst": false, 00:24:43.902 "ddgst": false, 00:24:43.902 "multipath": "failover", 00:24:43.902 "allow_unrecognized_csi": false, 00:24:43.902 "method": "bdev_nvme_attach_controller", 00:24:43.902 "req_id": 1 00:24:43.902 } 00:24:43.902 Got JSON-RPC error response 00:24:43.902 response: 00:24:43.902 { 00:24:43.902 "code": -114, 00:24:43.902 "message": "A controller named NVMe0 already exists with the specified network path" 00:24:43.902 } 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:43.902 
08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.902 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.162 NVMe0n1 00:24:44.162 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.162 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:24:44.162 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.162 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.162 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.162 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:24:44.162 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.162 08:21:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.162 00:24:44.162 08:21:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.162 08:21:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:24:44.162 08:21:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:24:44.162 08:21:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.162 08:21:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:44.162 08:21:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.162 08:21:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:24:44.162 08:21:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:45.540 { 00:24:45.540 "results": [ 00:24:45.540 { 00:24:45.540 "job": "NVMe0n1", 00:24:45.540 "core_mask": "0x1", 00:24:45.540 "workload": "write", 00:24:45.540 "status": "finished", 00:24:45.540 "queue_depth": 128, 00:24:45.540 "io_size": 4096, 00:24:45.540 "runtime": 1.004956, 00:24:45.540 "iops": 24646.85021035747, 00:24:45.540 "mibps": 96.27675863420886, 00:24:45.540 "io_failed": 0, 00:24:45.540 "io_timeout": 0, 00:24:45.540 "avg_latency_us": 5182.578952415558, 00:24:45.540 "min_latency_us": 3089.554285714286, 00:24:45.540 "max_latency_us": 14417.92 00:24:45.540 } 00:24:45.540 ], 00:24:45.540 "core_count": 1 00:24:45.540 } 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.540 08:21:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 1769611 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1769611 ']' 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1769611 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1769611 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1769611' 00:24:45.540 killing process with pid 1769611 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1769611 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1769611 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.540 08:21:59 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:24:45.540 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:24:45.540 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:45.541 [2024-11-20 08:21:57.217883] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:24:45.541 [2024-11-20 08:21:57.217934] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1769611 ] 00:24:45.541 [2024-11-20 08:21:57.291262] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:45.541 [2024-11-20 08:21:57.333408] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:45.541 [2024-11-20 08:21:58.150406] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name d1256a70-252c-4154-af03-5147284227c5 already exists 00:24:45.541 [2024-11-20 08:21:58.150434] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:d1256a70-252c-4154-af03-5147284227c5 alias for bdev NVMe1n1 00:24:45.541 [2024-11-20 08:21:58.150442] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:24:45.541 Running I/O for 1 seconds... 00:24:45.541 24577.00 IOPS, 96.00 MiB/s 00:24:45.541 Latency(us) 00:24:45.541 [2024-11-20T07:21:59.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.541 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:24:45.541 NVMe0n1 : 1.00 24646.85 96.28 0.00 0.00 5182.58 3089.55 14417.92 00:24:45.541 [2024-11-20T07:21:59.569Z] =================================================================================================================== 00:24:45.541 [2024-11-20T07:21:59.569Z] Total : 24646.85 96.28 0.00 0.00 5182.58 3089.55 14417.92 00:24:45.541 Received shutdown signal, test time was about 1.000000 seconds 00:24:45.541 00:24:45.541 Latency(us) 00:24:45.541 [2024-11-20T07:21:59.569Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:45.541 [2024-11-20T07:21:59.569Z] =================================================================================================================== 00:24:45.541 [2024-11-20T07:21:59.569Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:24:45.541 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:24:45.541 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:45.541 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:24:45.541 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:24:45.541 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:45.541 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@99 -- # sync 00:24:45.541 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:45.541 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@102 -- # set +e 00:24:45.541 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:45.541 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:45.800 rmmod nvme_tcp 00:24:45.800 rmmod nvme_fabrics 00:24:45.800 rmmod nvme_keyring 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@106 -- # set -e 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@107 -- # return 0 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # '[' -n 1769587 ']' 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@337 -- # killprocess 1769587 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 1769587 ']' 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 1769587 
00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1769587 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1769587' 00:24:45.800 killing process with pid 1769587 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 1769587 00:24:45.800 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 1769587 00:24:46.060 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:46.060 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # nvmf_fini 00:24:46.060 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@254 -- # local dev 00:24:46.060 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:46.060 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:46.060 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:46.060 08:21:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@121 -- # return 0 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:24:47.967 08:22:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # _dev=0 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # dev_map=() 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@274 -- # iptr 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # iptables-save 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@548 -- # iptables-restore 00:24:47.967 00:24:47.967 real 0m11.484s 00:24:47.967 user 0m13.119s 00:24:47.967 sys 0m5.232s 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:24:47.967 ************************************ 00:24:47.967 END TEST nvmf_multicontroller 00:24:47.967 ************************************ 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:47.967 08:22:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.227 ************************************ 00:24:48.227 START TEST nvmf_aer 00:24:48.227 ************************************ 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:24:48.227 * Looking for test storage... 
00:24:48.227 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:48.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.227 --rc genhtml_branch_coverage=1 00:24:48.227 --rc genhtml_function_coverage=1 00:24:48.227 --rc genhtml_legend=1 00:24:48.227 --rc geninfo_all_blocks=1 00:24:48.227 --rc geninfo_unexecuted_blocks=1 00:24:48.227 00:24:48.227 ' 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:48.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.227 --rc 
genhtml_branch_coverage=1 00:24:48.227 --rc genhtml_function_coverage=1 00:24:48.227 --rc genhtml_legend=1 00:24:48.227 --rc geninfo_all_blocks=1 00:24:48.227 --rc geninfo_unexecuted_blocks=1 00:24:48.227 00:24:48.227 ' 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:48.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.227 --rc genhtml_branch_coverage=1 00:24:48.227 --rc genhtml_function_coverage=1 00:24:48.227 --rc genhtml_legend=1 00:24:48.227 --rc geninfo_all_blocks=1 00:24:48.227 --rc geninfo_unexecuted_blocks=1 00:24:48.227 00:24:48.227 ' 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:48.227 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:48.227 --rc genhtml_branch_coverage=1 00:24:48.227 --rc genhtml_function_coverage=1 00:24:48.227 --rc genhtml_legend=1 00:24:48.227 --rc geninfo_all_blocks=1 00:24:48.227 --rc geninfo_unexecuted_blocks=1 00:24:48.227 00:24:48.227 ' 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:48.227 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:48.228 08:22:02 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@50 -- # : 0 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:48.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # remove_target_ns 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # xtrace_disable 00:24:48.228 08:22:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # pci_devs=() 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # net_devs=() 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # e810=() 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # local -ga e810 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # x722=() 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # local -ga x722 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # mlx=() 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # local -ga mlx 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:54.801 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 
00:24:54.801 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:54.801 Found net devices under 0000:86:00.0: cvl_0_0 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:54.801 08:22:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:54.801 Found net devices under 0000:86:00.1: cvl_0_1 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # is_hw=yes 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:24:54.801 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@247 -- # create_target_ns 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:24:54.802 08:22:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@28 -- # local -g _dev 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:54.802 
08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:54.802 08:22:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772161 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:54.802 08:22:08 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:54.802 10.0.0.1 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772162 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:54.802 10.0.0.2 
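The set_ip steps traced above turn a packed 32-bit pool value (167772161) into a dotted-quad address via the val_to_ip helper before assigning it to the device. The trace only shows the final printf with the octets already split; the bit-unpacking below is a reconstructed standalone sketch, not the actual setup.sh source:

```shell
# Paraphrased sketch of the val_to_ip helper seen in the trace: it unpacks a
# 32-bit integer into dotted-quad notation (167772161 == 0x0A000001 == 10.0.0.1).
# The shifting logic is reconstructed; only the printf format comes from the log.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This is why the interface pair in the log lands on consecutive addresses: the pool value is simply incremented once per device (`ips=("$ip" $((++ip)))`).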
00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # 
dev_map["target$id"]=cvl_0_1 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:54.802 08:22:08 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:54.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:54.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:24:54.802 00:24:54.802 --- 10.0.0.1 ping statistics --- 00:24:54.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.802 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target0 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:54.802 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:24:54.803 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:54.803 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:24:54.803 00:24:54.803 --- 10.0.0.2 ping statistics --- 00:24:54.803 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:54.803 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair++ )) 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # return 0 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator0 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 
-- # dev=cvl_0_0 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=initiator1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # return 1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev= 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@160 -- # return 0 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:24:54.803 08:22:08 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target0 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target0 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 
00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # get_net_dev target1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # local dev=target1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # return 1 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@159 -- # dev= 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@160 -- # return 0 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:24:54.803 ' 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:54.803 
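The repeated get_ip_address calls in the trace recover each interface's address by reading back the sysfs ifalias file that set_ip wrote during setup. A minimal runnable sketch of that lookup follows; `SYSFS_NET` is a temp-directory stand-in for /sys/class/net so the sketch runs without real devices, and the function body is paraphrased from the log rather than copied from setup.sh:

```shell
# Sketch of the ifalias round-trip used by the harness: setup writes the IP
# into <dev>/ifalias, and get_ip_address later reads it back.
# SYSFS_NET is a hypothetical stand-in for /sys/class/net.
SYSFS_NET=$(mktemp -d)
mkdir -p "$SYSFS_NET/cvl_0_0"
echo 10.0.0.1 > "$SYSFS_NET/cvl_0_0/ifalias"   # what set_ip does (log @200)

get_ip_address() {
  local dev=$1
  cat "$SYSFS_NET/$dev/ifalias"                # what the lookup does (log @163)
}

get_ip_address cvl_0_0   # 10.0.0.1
```

Storing the address in ifalias lets later stages (NVMF_FIRST_INITIATOR_IP, NVMF_FIRST_TARGET_IP in the log) query it without re-parsing `ip addr` output, including across the netns boundary via `ip netns exec`.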
08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # nvmfpid=1773625 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # waitforlisten 1773625 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 1773625 ']' 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:54.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.803 [2024-11-20 08:22:08.364635] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:24:54.803 [2024-11-20 08:22:08.364679] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:54.803 [2024-11-20 08:22:08.442633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:54.803 [2024-11-20 08:22:08.484643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:54.803 [2024-11-20 08:22:08.484681] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:54.803 [2024-11-20 08:22:08.484688] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:54.803 [2024-11-20 08:22:08.484694] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:54.803 [2024-11-20 08:22:08.484699] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:54.803 [2024-11-20 08:22:08.486276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:54.803 [2024-11-20 08:22:08.486383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:54.803 [2024-11-20 08:22:08.486492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:54.803 [2024-11-20 08:22:08.486494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:54.803 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.804 [2024-11-20 08:22:08.622113] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.804 Malloc0 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.804 [2024-11-20 08:22:08.681300] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:54.804 [ 00:24:54.804 { 00:24:54.804 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:54.804 "subtype": "Discovery", 00:24:54.804 "listen_addresses": 
[], 00:24:54.804 "allow_any_host": true, 00:24:54.804 "hosts": [] 00:24:54.804 }, 00:24:54.804 { 00:24:54.804 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:54.804 "subtype": "NVMe", 00:24:54.804 "listen_addresses": [ 00:24:54.804 { 00:24:54.804 "trtype": "TCP", 00:24:54.804 "adrfam": "IPv4", 00:24:54.804 "traddr": "10.0.0.2", 00:24:54.804 "trsvcid": "4420" 00:24:54.804 } 00:24:54.804 ], 00:24:54.804 "allow_any_host": true, 00:24:54.804 "hosts": [], 00:24:54.804 "serial_number": "SPDK00000000000001", 00:24:54.804 "model_number": "SPDK bdev Controller", 00:24:54.804 "max_namespaces": 2, 00:24:54.804 "min_cntlid": 1, 00:24:54.804 "max_cntlid": 65519, 00:24:54.804 "namespaces": [ 00:24:54.804 { 00:24:54.804 "nsid": 1, 00:24:54.804 "bdev_name": "Malloc0", 00:24:54.804 "name": "Malloc0", 00:24:54.804 "nguid": "63ECB471B2D0410085B246967246C916", 00:24:54.804 "uuid": "63ecb471-b2d0-4100-85b2-46967246c916" 00:24:54.804 } 00:24:54.804 ] 00:24:54.804 } 00:24:54.804 ] 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=1773648 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:24:54.804 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:55.064 Malloc1 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.064 08:22:08 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:55.064 Asynchronous Event Request test 00:24:55.064 Attaching to 10.0.0.2 00:24:55.064 Attached to 10.0.0.2 00:24:55.064 Registering asynchronous event callbacks... 00:24:55.064 Starting namespace attribute notice tests for all controllers... 00:24:55.064 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:24:55.064 aer_cb - Changed Namespace 00:24:55.064 Cleaning up... 00:24:55.064 [ 00:24:55.064 { 00:24:55.064 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:24:55.064 "subtype": "Discovery", 00:24:55.064 "listen_addresses": [], 00:24:55.064 "allow_any_host": true, 00:24:55.064 "hosts": [] 00:24:55.064 }, 00:24:55.064 { 00:24:55.064 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:24:55.064 "subtype": "NVMe", 00:24:55.064 "listen_addresses": [ 00:24:55.064 { 00:24:55.064 "trtype": "TCP", 00:24:55.064 "adrfam": "IPv4", 00:24:55.064 "traddr": "10.0.0.2", 00:24:55.064 "trsvcid": "4420" 00:24:55.064 } 00:24:55.064 ], 00:24:55.064 "allow_any_host": true, 00:24:55.064 "hosts": [], 00:24:55.064 "serial_number": "SPDK00000000000001", 00:24:55.064 "model_number": "SPDK bdev Controller", 00:24:55.064 "max_namespaces": 2, 00:24:55.064 "min_cntlid": 1, 00:24:55.064 "max_cntlid": 65519, 00:24:55.064 "namespaces": [ 00:24:55.064 { 00:24:55.064 "nsid": 1, 00:24:55.064 "bdev_name": "Malloc0", 00:24:55.064 "name": "Malloc0", 00:24:55.064 "nguid": "63ECB471B2D0410085B246967246C916", 00:24:55.064 "uuid": "63ecb471-b2d0-4100-85b2-46967246c916" 00:24:55.064 }, 00:24:55.064 { 00:24:55.064 "nsid": 2, 00:24:55.064 "bdev_name": "Malloc1", 00:24:55.064 "name": "Malloc1", 00:24:55.064 "nguid": "64E84BE50C45440FB244711EE4C1D1E9", 00:24:55.064 "uuid": "64e84be5-0c45-440f-b244-711ee4c1d1e9" 
00:24:55.064 } 00:24:55.064 ] 00:24:55.064 } 00:24:55.064 ] 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 1773648 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.064 08:22:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@99 -- # sync 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@102 -- # set +e 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:55.064 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:55.064 rmmod nvme_tcp 00:24:55.064 rmmod nvme_fabrics 00:24:55.064 rmmod nvme_keyring 00:24:55.323 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:55.323 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # set -e 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # return 0 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # '[' -n 1773625 ']' 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@337 -- # killprocess 1773625 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 1773625 ']' 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 1773625 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1773625 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1773625' 00:24:55.324 killing process with pid 1773625 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 1773625 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 1773625 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # 
'[' '' == iso ']' 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # nvmf_fini 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@254 -- # local dev 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@257 -- # remove_target_ns 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:55.324 08:22:09 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@258 -- # delete_main_bridge 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@121 -- # return 0 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:24:57.862 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # _dev=0 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # dev_map=() 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@274 -- # iptr 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # iptables-save 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@548 -- # iptables-restore 00:24:57.863 00:24:57.863 real 0m9.378s 00:24:57.863 user 0m5.152s 00:24:57.863 sys 0m4.940s 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:24:57.863 ************************************ 00:24:57.863 END TEST nvmf_aer 00:24:57.863 ************************************ 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:57.863 08:22:11 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.863 ************************************ 00:24:57.863 START TEST nvmf_async_init 00:24:57.863 ************************************ 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:24:57.863 * Looking for test storage... 00:24:57.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:24:57.863 
08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:57.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.863 --rc genhtml_branch_coverage=1 00:24:57.863 --rc genhtml_function_coverage=1 00:24:57.863 --rc genhtml_legend=1 00:24:57.863 --rc geninfo_all_blocks=1 00:24:57.863 --rc geninfo_unexecuted_blocks=1 00:24:57.863 00:24:57.863 ' 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:57.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.863 --rc genhtml_branch_coverage=1 00:24:57.863 --rc genhtml_function_coverage=1 00:24:57.863 --rc genhtml_legend=1 00:24:57.863 --rc geninfo_all_blocks=1 00:24:57.863 --rc geninfo_unexecuted_blocks=1 00:24:57.863 00:24:57.863 ' 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:57.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.863 --rc genhtml_branch_coverage=1 00:24:57.863 --rc genhtml_function_coverage=1 00:24:57.863 --rc genhtml_legend=1 00:24:57.863 --rc geninfo_all_blocks=1 00:24:57.863 --rc geninfo_unexecuted_blocks=1 00:24:57.863 00:24:57.863 ' 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:57.863 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:57.863 --rc genhtml_branch_coverage=1 00:24:57.863 --rc genhtml_function_coverage=1 00:24:57.863 --rc genhtml_legend=1 00:24:57.863 --rc geninfo_all_blocks=1 00:24:57.863 --rc geninfo_unexecuted_blocks=1 00:24:57.863 00:24:57.863 ' 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.863 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@50 -- # : 0 00:24:57.864 08:22:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:57.864 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=dc360da00f8a4779b76690f5e4140751 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:57.864 
08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # remove_target_ns 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # xtrace_disable 00:24:57.864 08:22:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # pci_devs=() 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@135 -- # net_devs=() 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # e810=() 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # local -ga e810 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # x722=() 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # local -ga x722 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # mlx=() 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # local -ga mlx 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:04.437 
08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:04.437 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:04.437 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:04.437 
08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:04.437 Found net devices under 0000:86:00.0: cvl_0_0 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:04.437 Found net devices under 0000:86:00.1: cvl_0_1 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # is_hw=yes 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@247 -- # create_target_ns 00:25:04.437 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:04.438 08:22:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@28 -- # local -g _dev 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:04.438 
08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 
167772161 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772161 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:04.438 10.0.0.1 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772162 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_1' 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:04.438 10.0.0.2 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:25:04.438 08:22:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # 
local dev=initiator0 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:04.438 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:04.438 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:25:04.438 00:25:04.438 --- 10.0.0.1 ping statistics --- 00:25:04.438 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.438 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:04.438 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target0 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:04.439 08:22:17 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:04.439 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:04.439 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:25:04.439 00:25:04.439 --- 10.0.0.2 ping statistics --- 00:25:04.439 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:04.439 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # return 0 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:04.439 
08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev initiator1 
00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # return 1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev= 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@160 -- # return 0 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target0 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@101 -- # echo 
cvl_0_1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # local dev=target1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@100 -- # return 1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@159 -- # dev= 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@160 -- # return 0 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:25:04.439 ' 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # nvmfpid=1777305 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # 
waitforlisten 1777305 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 1777305 ']' 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:04.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:04.439 08:22:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.439 [2024-11-20 08:22:17.847005] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:25:04.439 [2024-11-20 08:22:17.847049] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:04.439 [2024-11-20 08:22:17.924846] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:04.439 [2024-11-20 08:22:17.965326] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:04.439 [2024-11-20 08:22:17.965367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:04.439 [2024-11-20 08:22:17.965374] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:04.440 [2024-11-20 08:22:17.965380] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:04.440 [2024-11-20 08:22:17.965385] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:04.440 [2024-11-20 08:22:17.965944] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.440 [2024-11-20 08:22:18.099626] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.440 null0 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.440 08:22:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g dc360da00f8a4779b76690f5e4140751 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.440 [2024-11-20 08:22:18.151893] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.440 nvme0n1 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.440 [ 00:25:04.440 { 00:25:04.440 "name": "nvme0n1", 00:25:04.440 "aliases": [ 00:25:04.440 "dc360da0-0f8a-4779-b766-90f5e4140751" 00:25:04.440 ], 00:25:04.440 "product_name": "NVMe disk", 00:25:04.440 "block_size": 512, 00:25:04.440 "num_blocks": 2097152, 00:25:04.440 "uuid": "dc360da0-0f8a-4779-b766-90f5e4140751", 00:25:04.440 "numa_id": 1, 00:25:04.440 "assigned_rate_limits": { 00:25:04.440 "rw_ios_per_sec": 0, 00:25:04.440 "rw_mbytes_per_sec": 0, 00:25:04.440 "r_mbytes_per_sec": 0, 00:25:04.440 "w_mbytes_per_sec": 0 00:25:04.440 }, 00:25:04.440 "claimed": false, 00:25:04.440 "zoned": false, 00:25:04.440 "supported_io_types": { 00:25:04.440 "read": true, 00:25:04.440 "write": true, 00:25:04.440 "unmap": false, 00:25:04.440 "flush": true, 00:25:04.440 "reset": true, 00:25:04.440 "nvme_admin": true, 00:25:04.440 "nvme_io": true, 00:25:04.440 "nvme_io_md": false, 00:25:04.440 "write_zeroes": true, 00:25:04.440 "zcopy": false, 00:25:04.440 "get_zone_info": false, 00:25:04.440 "zone_management": false, 00:25:04.440 "zone_append": false, 00:25:04.440 "compare": true, 00:25:04.440 
"compare_and_write": true, 00:25:04.440 "abort": true, 00:25:04.440 "seek_hole": false, 00:25:04.440 "seek_data": false, 00:25:04.440 "copy": true, 00:25:04.440 "nvme_iov_md": false 00:25:04.440 }, 00:25:04.440 "memory_domains": [ 00:25:04.440 { 00:25:04.440 "dma_device_id": "system", 00:25:04.440 "dma_device_type": 1 00:25:04.440 } 00:25:04.440 ], 00:25:04.440 "driver_specific": { 00:25:04.440 "nvme": [ 00:25:04.440 { 00:25:04.440 "trid": { 00:25:04.440 "trtype": "TCP", 00:25:04.440 "adrfam": "IPv4", 00:25:04.440 "traddr": "10.0.0.2", 00:25:04.440 "trsvcid": "4420", 00:25:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:04.440 }, 00:25:04.440 "ctrlr_data": { 00:25:04.440 "cntlid": 1, 00:25:04.440 "vendor_id": "0x8086", 00:25:04.440 "model_number": "SPDK bdev Controller", 00:25:04.440 "serial_number": "00000000000000000000", 00:25:04.440 "firmware_revision": "25.01", 00:25:04.440 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:04.440 "oacs": { 00:25:04.440 "security": 0, 00:25:04.440 "format": 0, 00:25:04.440 "firmware": 0, 00:25:04.440 "ns_manage": 0 00:25:04.440 }, 00:25:04.440 "multi_ctrlr": true, 00:25:04.440 "ana_reporting": false 00:25:04.440 }, 00:25:04.440 "vs": { 00:25:04.440 "nvme_version": "1.3" 00:25:04.440 }, 00:25:04.440 "ns_data": { 00:25:04.440 "id": 1, 00:25:04.440 "can_share": true 00:25:04.440 } 00:25:04.440 } 00:25:04.440 ], 00:25:04.440 "mp_policy": "active_passive" 00:25:04.440 } 00:25:04.440 } 00:25:04.440 ] 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.440 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.440 [2024-11-20 08:22:18.416417] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:25:04.440 [2024-11-20 08:22:18.416471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daa220 (9): Bad file descriptor 00:25:04.700 [2024-11-20 08:22:18.549291] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.700 [ 00:25:04.700 { 00:25:04.700 "name": "nvme0n1", 00:25:04.700 "aliases": [ 00:25:04.700 "dc360da0-0f8a-4779-b766-90f5e4140751" 00:25:04.700 ], 00:25:04.700 "product_name": "NVMe disk", 00:25:04.700 "block_size": 512, 00:25:04.700 "num_blocks": 2097152, 00:25:04.700 "uuid": "dc360da0-0f8a-4779-b766-90f5e4140751", 00:25:04.700 "numa_id": 1, 00:25:04.700 "assigned_rate_limits": { 00:25:04.700 "rw_ios_per_sec": 0, 00:25:04.700 "rw_mbytes_per_sec": 0, 00:25:04.700 "r_mbytes_per_sec": 0, 00:25:04.700 "w_mbytes_per_sec": 0 00:25:04.700 }, 00:25:04.700 "claimed": false, 00:25:04.700 "zoned": false, 00:25:04.700 "supported_io_types": { 00:25:04.700 "read": true, 00:25:04.700 "write": true, 00:25:04.700 "unmap": false, 00:25:04.700 "flush": true, 00:25:04.700 "reset": true, 00:25:04.700 "nvme_admin": true, 00:25:04.700 "nvme_io": true, 00:25:04.700 "nvme_io_md": false, 00:25:04.700 "write_zeroes": true, 00:25:04.700 "zcopy": false, 00:25:04.700 "get_zone_info": false, 00:25:04.700 "zone_management": false, 00:25:04.700 "zone_append": false, 00:25:04.700 "compare": true, 00:25:04.700 "compare_and_write": true, 00:25:04.700 "abort": true, 00:25:04.700 
"seek_hole": false, 00:25:04.700 "seek_data": false, 00:25:04.700 "copy": true, 00:25:04.700 "nvme_iov_md": false 00:25:04.700 }, 00:25:04.700 "memory_domains": [ 00:25:04.700 { 00:25:04.700 "dma_device_id": "system", 00:25:04.700 "dma_device_type": 1 00:25:04.700 } 00:25:04.700 ], 00:25:04.700 "driver_specific": { 00:25:04.700 "nvme": [ 00:25:04.700 { 00:25:04.700 "trid": { 00:25:04.700 "trtype": "TCP", 00:25:04.700 "adrfam": "IPv4", 00:25:04.700 "traddr": "10.0.0.2", 00:25:04.700 "trsvcid": "4420", 00:25:04.700 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:04.700 }, 00:25:04.700 "ctrlr_data": { 00:25:04.700 "cntlid": 2, 00:25:04.700 "vendor_id": "0x8086", 00:25:04.700 "model_number": "SPDK bdev Controller", 00:25:04.700 "serial_number": "00000000000000000000", 00:25:04.700 "firmware_revision": "25.01", 00:25:04.700 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:04.700 "oacs": { 00:25:04.700 "security": 0, 00:25:04.700 "format": 0, 00:25:04.700 "firmware": 0, 00:25:04.700 "ns_manage": 0 00:25:04.700 }, 00:25:04.700 "multi_ctrlr": true, 00:25:04.700 "ana_reporting": false 00:25:04.700 }, 00:25:04.700 "vs": { 00:25:04.700 "nvme_version": "1.3" 00:25:04.700 }, 00:25:04.700 "ns_data": { 00:25:04.700 "id": 1, 00:25:04.700 "can_share": true 00:25:04.700 } 00:25:04.700 } 00:25:04.700 ], 00:25:04.700 "mp_policy": "active_passive" 00:25:04.700 } 00:25:04.700 } 00:25:04.700 ] 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.700 08:22:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.AqbOTRbqgO 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.AqbOTRbqgO 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.AqbOTRbqgO 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.700 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.701 [2024-11-20 08:22:18.625040] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:25:04.701 [2024-11-20 08:22:18.625143] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.701 [2024-11-20 08:22:18.645106] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:25:04.701 nvme0n1 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.701 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.701 [ 00:25:04.701 { 00:25:04.701 "name": "nvme0n1", 00:25:04.701 "aliases": [ 00:25:04.701 "dc360da0-0f8a-4779-b766-90f5e4140751" 00:25:04.701 ], 00:25:04.960 "product_name": "NVMe disk", 00:25:04.960 "block_size": 512, 
00:25:04.960 "num_blocks": 2097152, 00:25:04.960 "uuid": "dc360da0-0f8a-4779-b766-90f5e4140751", 00:25:04.960 "numa_id": 1, 00:25:04.960 "assigned_rate_limits": { 00:25:04.960 "rw_ios_per_sec": 0, 00:25:04.960 "rw_mbytes_per_sec": 0, 00:25:04.960 "r_mbytes_per_sec": 0, 00:25:04.960 "w_mbytes_per_sec": 0 00:25:04.960 }, 00:25:04.960 "claimed": false, 00:25:04.960 "zoned": false, 00:25:04.960 "supported_io_types": { 00:25:04.960 "read": true, 00:25:04.960 "write": true, 00:25:04.960 "unmap": false, 00:25:04.960 "flush": true, 00:25:04.960 "reset": true, 00:25:04.960 "nvme_admin": true, 00:25:04.960 "nvme_io": true, 00:25:04.960 "nvme_io_md": false, 00:25:04.960 "write_zeroes": true, 00:25:04.960 "zcopy": false, 00:25:04.960 "get_zone_info": false, 00:25:04.960 "zone_management": false, 00:25:04.960 "zone_append": false, 00:25:04.960 "compare": true, 00:25:04.960 "compare_and_write": true, 00:25:04.960 "abort": true, 00:25:04.960 "seek_hole": false, 00:25:04.960 "seek_data": false, 00:25:04.960 "copy": true, 00:25:04.960 "nvme_iov_md": false 00:25:04.960 }, 00:25:04.960 "memory_domains": [ 00:25:04.960 { 00:25:04.960 "dma_device_id": "system", 00:25:04.960 "dma_device_type": 1 00:25:04.960 } 00:25:04.960 ], 00:25:04.960 "driver_specific": { 00:25:04.960 "nvme": [ 00:25:04.960 { 00:25:04.960 "trid": { 00:25:04.960 "trtype": "TCP", 00:25:04.960 "adrfam": "IPv4", 00:25:04.960 "traddr": "10.0.0.2", 00:25:04.960 "trsvcid": "4421", 00:25:04.960 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:25:04.960 }, 00:25:04.960 "ctrlr_data": { 00:25:04.960 "cntlid": 3, 00:25:04.960 "vendor_id": "0x8086", 00:25:04.960 "model_number": "SPDK bdev Controller", 00:25:04.960 "serial_number": "00000000000000000000", 00:25:04.960 "firmware_revision": "25.01", 00:25:04.960 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:04.960 "oacs": { 00:25:04.960 "security": 0, 00:25:04.960 "format": 0, 00:25:04.960 "firmware": 0, 00:25:04.960 "ns_manage": 0 00:25:04.960 }, 00:25:04.960 "multi_ctrlr": true, 
00:25:04.960 "ana_reporting": false 00:25:04.960 }, 00:25:04.960 "vs": { 00:25:04.960 "nvme_version": "1.3" 00:25:04.960 }, 00:25:04.960 "ns_data": { 00:25:04.960 "id": 1, 00:25:04.960 "can_share": true 00:25:04.960 } 00:25:04.960 } 00:25:04.960 ], 00:25:04.960 "mp_policy": "active_passive" 00:25:04.960 } 00:25:04.960 } 00:25:04.960 ] 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.AqbOTRbqgO 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@99 -- # sync 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # set +e 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:04.960 rmmod nvme_tcp 00:25:04.960 rmmod nvme_fabrics 00:25:04.960 rmmod nvme_keyring 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # set -e 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # return 0 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # '[' -n 1777305 ']' 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@337 -- # killprocess 1777305 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 1777305 ']' 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 1777305 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1777305 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1777305' 00:25:04.960 killing process with pid 1777305 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 1777305 00:25:04.960 08:22:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 1777305 00:25:05.220 08:22:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:05.220 08:22:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # nvmf_fini 00:25:05.220 08:22:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@254 -- # local dev 00:25:05.220 08:22:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@257 -- # remove_target_ns 
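The async_init TLS sequence above writes a PSK in interchange format (`NVMeTLSkey-1:01:...:`) to a temp file, registers it with `keyring_file_add_key`, and passes `--psk key0` to both the `--secure-channel` listener and the `bdev_nvme_attach_controller` call. A minimal Python sketch of how such a key string decomposes, using the key echoed verbatim in the log; the field semantics (hash identifier, base64 payload carrying the PSK plus a trailing 4-byte CRC-32) are my reading of the NVMe/TCP PSK interchange format, not code taken from SPDK:

```python
import base64

def parse_psk_interchange(key: str):
    """Decompose an NVMe TLS PSK interchange string of the form
    NVMeTLSkey-1:<hash>:<base64>: (illustrative sketch, not SPDK code)."""
    if not key.startswith("NVMeTLSkey-1:") or not key.endswith(":"):
        raise ValueError("not a PSK interchange string")
    _, hash_id, payload, _ = key.split(":")
    raw = base64.b64decode(payload)
    # The payload is the configured PSK followed by a 4-byte CRC-32
    # of the PSK (little-endian, per the interchange format).
    psk, _crc = raw[:-4], raw[-4:]
    # Hash id "01" corresponds to a 32-byte PSK, "02" to 48 bytes
    # (assumption based on the interchange format, hedged above).
    expected_len = {"01": 32, "02": 48}[hash_id]
    if len(psk) != expected_len:
        raise ValueError("PSK length does not match hash identifier")
    return hash_id, psk
```

Applied to the key from the log, this yields the 32-byte test PSK `00112233445566778899aabbccddeeff`, which is why the RPCs above succeed with `--psk key0`.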
00:25:05.220 08:22:19 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:05.220 08:22:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:05.220 08:22:19 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@121 -- # return 0 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:25:07.126 08:22:21 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # _dev=0 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # dev_map=() 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@274 -- # iptr 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # iptables-save 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@548 -- # iptables-restore 00:25:07.126 00:25:07.126 real 0m9.620s 00:25:07.126 user 0m3.168s 00:25:07.126 sys 0m4.878s 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:25:07.126 ************************************ 00:25:07.126 END TEST nvmf_async_init 00:25:07.126 ************************************ 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.126 08:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.386 
************************************ 00:25:07.386 START TEST dma 00:25:07.386 ************************************ 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:25:07.386 * Looking for test storage... 00:25:07.386 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lcov --version 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:25:07.386 08:22:21 
nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:07.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.386 --rc genhtml_branch_coverage=1 00:25:07.386 --rc genhtml_function_coverage=1 00:25:07.386 --rc genhtml_legend=1 00:25:07.386 --rc geninfo_all_blocks=1 00:25:07.386 --rc geninfo_unexecuted_blocks=1 00:25:07.386 00:25:07.386 ' 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:25:07.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.386 --rc genhtml_branch_coverage=1 00:25:07.386 --rc genhtml_function_coverage=1 00:25:07.386 --rc genhtml_legend=1 00:25:07.386 --rc geninfo_all_blocks=1 00:25:07.386 --rc geninfo_unexecuted_blocks=1 00:25:07.386 00:25:07.386 ' 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:07.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.386 --rc genhtml_branch_coverage=1 00:25:07.386 --rc genhtml_function_coverage=1 00:25:07.386 --rc genhtml_legend=1 00:25:07.386 --rc geninfo_all_blocks=1 00:25:07.386 --rc geninfo_unexecuted_blocks=1 00:25:07.386 00:25:07.386 ' 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:07.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.386 --rc genhtml_branch_coverage=1 00:25:07.386 --rc genhtml_function_coverage=1 00:25:07.386 --rc genhtml_legend=1 00:25:07.386 --rc geninfo_all_blocks=1 00:25:07.386 --rc geninfo_unexecuted_blocks=1 00:25:07.386 00:25:07.386 ' 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.386 08:22:21 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:25:07.387 
08:22:21 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@50 -- # : 0 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:07.387 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:07.387 08:22:21 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:25:07.387 00:25:07.387 real 0m0.213s 00:25:07.387 user 0m0.131s 00:25:07.387 sys 0m0.096s 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:07.387 08:22:21 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:25:07.387 ************************************ 00:25:07.387 END TEST dma 00:25:07.387 ************************************ 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.647 ************************************ 00:25:07.647 START TEST nvmf_identify 00:25:07.647 ************************************ 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:25:07.647 * Looking for test storage... 
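Both the dma and identify preambles run the same lcov version gate: `lt 1.15 2` calls `cmp_versions` in scripts/common.sh, which reads each version into an array (splitting on `.`, `-`, `:` via `IFS=.-:`) and compares field by field, with missing fields treated as zero. A rough Python equivalent of that field-wise comparison, splitting on dots only for brevity (illustrative sketch, not the shell implementation):

```python
def cmp_versions(v1: str, op: str, v2: str) -> bool:
    """Field-wise dotted version comparison, mirroring the
    scripts/common.sh cmp_versions trace in the log (sketch only)."""
    a = [int(x) for x in v1.split(".")]
    b = [int(x) for x in v2.split(".")]
    # Pad the shorter version with zeros so '2' compares as '2.0',
    # matching the shell loop that iterates up to the longer length.
    n = max(len(a), len(b))
    a += [0] * (n - len(a))
    b += [0] * (n - len(b))
    ops = {"<": a < b, ">": a > b, "==": a == b, "<=": a <= b, ">=": a >= b}
    return ops[op]
```

With the lcov version seen here, `cmp_versions("1.15", "<", "2")` holds, so the test scripts take the `lt 1.15 2` branch and set the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options exported in the trace that follows.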
00:25:07.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:07.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.647 --rc genhtml_branch_coverage=1 00:25:07.647 --rc genhtml_function_coverage=1 00:25:07.647 --rc genhtml_legend=1 00:25:07.647 --rc geninfo_all_blocks=1 00:25:07.647 --rc geninfo_unexecuted_blocks=1 00:25:07.647 00:25:07.647 ' 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:25:07.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.647 --rc genhtml_branch_coverage=1 00:25:07.647 --rc genhtml_function_coverage=1 00:25:07.647 --rc genhtml_legend=1 00:25:07.647 --rc geninfo_all_blocks=1 00:25:07.647 --rc geninfo_unexecuted_blocks=1 00:25:07.647 00:25:07.647 ' 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:07.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.647 --rc genhtml_branch_coverage=1 00:25:07.647 --rc genhtml_function_coverage=1 00:25:07.647 --rc genhtml_legend=1 00:25:07.647 --rc geninfo_all_blocks=1 00:25:07.647 --rc geninfo_unexecuted_blocks=1 00:25:07.647 00:25:07.647 ' 00:25:07.647 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:07.648 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:07.648 --rc genhtml_branch_coverage=1 00:25:07.648 --rc genhtml_function_coverage=1 00:25:07.648 --rc genhtml_legend=1 00:25:07.648 --rc geninfo_all_blocks=1 00:25:07.648 --rc geninfo_unexecuted_blocks=1 00:25:07.648 00:25:07.648 ' 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- 
# NVMF_TRANSPORT_OPTS= 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:07.648 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # xtrace_disable 00:25:07.648 08:22:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # pci_devs=() 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # net_devs=() 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # e810=() 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # local -ga e810 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # x722=() 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # local -ga x722 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # mlx=() 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # local -ga mlx 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:14.225 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:14.225 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:14.225 Found net devices under 0000:86:00.0: cvl_0_0 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:14.225 Found net devices under 0000:86:00.1: cvl_0_1 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # is_hw=yes 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
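The `pci_net_devs=("${pci_net_devs[@]##*/}")` step traced above uses a bash array parameter expansion to strip everything up to the last `/` from each element, turning sysfs paths into bare interface names. A minimal standalone sketch of that expansion (the example path is illustrative):

```shell
#!/usr/bin/env bash
# "${arr[@]##*/}" applies the greedy prefix-strip pattern '*/' to every
# element, leaving only the final path component (the net device name).
pci_net_devs=("/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0")
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"   # cvl_0_0
```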
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@247 -- # create_target_ns 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:14.225 08:22:27 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:14.225 08:22:27 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:25:14.225 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:14.226 10.0.0.1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # 
val_to_ip 167772162 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:14.226 10.0.0.2 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n 
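The `val_to_ip` calls traced above turn a 32-bit integer from the IP pool (e.g. 167772161, i.e. 0x0a000001) into dotted-quad form via `printf '%u.%u.%u.%u\n'`. A self-contained sketch of that conversion, assuming the same byte layout as the trace (this re-derives the octets by shifting; the actual setup.sh helper may compute them differently):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer to dotted-quad notation by extracting each
# byte, most significant first, as seen in the nvmf/setup.sh trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}
val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```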
ns=NVMF_TARGET_NS_CMD 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:14.226 
08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ip netns 
exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:14.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:14.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.386 ms 00:25:14.226 00:25:14.226 --- 10.0.0.1 ping statistics --- 00:25:14.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.226 rtt min/avg/max/mdev = 0.386/0.386/0.386/0.000 ms 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:14.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:14.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:25:14.226 00:25:14.226 --- 10.0.0.2 ping statistics --- 00:25:14.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:14.226 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # return 0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:14.226 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev initiator1 
00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # return 1 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev= 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@160 -- # return 0 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target0 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:14.227 08:22:27 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # local dev=target1 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # return 1 00:25:14.227 08:22:27 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@159 -- # dev= 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@160 -- # return 0 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:25:14.227 ' 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1781048 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1781048 
00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 1781048 ']' 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:14.227 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:14.227 [2024-11-20 08:22:27.767579] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:25:14.227 [2024-11-20 08:22:27.767626] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:14.227 [2024-11-20 08:22:27.845775] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:14.227 [2024-11-20 08:22:27.888588] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:14.227 [2024-11-20 08:22:27.888625] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:14.227 [2024-11-20 08:22:27.888632] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:14.227 [2024-11-20 08:22:27.888638] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:14.227 [2024-11-20 08:22:27.888643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:14.227 [2024-11-20 08:22:27.890248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:14.227 [2024-11-20 08:22:27.890300] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:14.227 [2024-11-20 08:22:27.890411] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:14.227 [2024-11-20 08:22:27.890412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:14.227 [2024-11-20 08:22:27.989826] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:14.227 08:22:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:14.227 Malloc0 
00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:14.227 [2024-11-20 08:22:28.085220] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:14.227 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.228 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:14.228 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 
00:25:14.228 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:14.228 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.228 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:25:14.228 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.228 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:14.228 [ 00:25:14.228 { 00:25:14.228 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:14.228 "subtype": "Discovery", 00:25:14.228 "listen_addresses": [ 00:25:14.228 { 00:25:14.228 "trtype": "TCP", 00:25:14.228 "adrfam": "IPv4", 00:25:14.228 "traddr": "10.0.0.2", 00:25:14.228 "trsvcid": "4420" 00:25:14.228 } 00:25:14.228 ], 00:25:14.228 "allow_any_host": true, 00:25:14.228 "hosts": [] 00:25:14.228 }, 00:25:14.228 { 00:25:14.228 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:14.228 "subtype": "NVMe", 00:25:14.228 "listen_addresses": [ 00:25:14.228 { 00:25:14.228 "trtype": "TCP", 00:25:14.228 "adrfam": "IPv4", 00:25:14.228 "traddr": "10.0.0.2", 00:25:14.228 "trsvcid": "4420" 00:25:14.228 } 00:25:14.228 ], 00:25:14.228 "allow_any_host": true, 00:25:14.228 "hosts": [], 00:25:14.228 "serial_number": "SPDK00000000000001", 00:25:14.228 "model_number": "SPDK bdev Controller", 00:25:14.228 "max_namespaces": 32, 00:25:14.228 "min_cntlid": 1, 00:25:14.228 "max_cntlid": 65519, 00:25:14.228 "namespaces": [ 00:25:14.228 { 00:25:14.228 "nsid": 1, 00:25:14.228 "bdev_name": "Malloc0", 00:25:14.228 "name": "Malloc0", 00:25:14.228 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:25:14.228 "eui64": "ABCDEF0123456789", 00:25:14.228 "uuid": "a464ad1b-5def-4ac5-9d81-fb23ef9bc762" 00:25:14.228 } 00:25:14.228 ] 00:25:14.228 } 00:25:14.228 ] 00:25:14.228 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.228 08:22:28 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:25:14.228 [2024-11-20 08:22:28.137224] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:25:14.228 [2024-11-20 08:22:28.137257] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1781265 ] 00:25:14.228 [2024-11-20 08:22:28.175602] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:25:14.228 [2024-11-20 08:22:28.175651] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:14.228 [2024-11-20 08:22:28.175656] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:14.228 [2024-11-20 08:22:28.175666] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:14.228 [2024-11-20 08:22:28.175674] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:14.228 [2024-11-20 08:22:28.179499] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:25:14.228 [2024-11-20 08:22:28.179529] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1b0c690 0 00:25:14.228 [2024-11-20 08:22:28.187216] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:14.228 [2024-11-20 08:22:28.187232] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:14.228 [2024-11-20 08:22:28.187237] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 
0 00:25:14.228 [2024-11-20 08:22:28.187239] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:14.228 [2024-11-20 08:22:28.187269] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.187275] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.187279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c690) 00:25:14.228 [2024-11-20 08:22:28.187290] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:14.228 [2024-11-20 08:22:28.187306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e100, cid 0, qid 0 00:25:14.228 [2024-11-20 08:22:28.195211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.228 [2024-11-20 08:22:28.195219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.228 [2024-11-20 08:22:28.195222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195227] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e100) on tqpair=0x1b0c690 00:25:14.228 [2024-11-20 08:22:28.195236] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:14.228 [2024-11-20 08:22:28.195243] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:25:14.228 [2024-11-20 08:22:28.195247] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:25:14.228 [2024-11-20 08:22:28.195259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c690) 00:25:14.228 [2024-11-20 08:22:28.195273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.228 [2024-11-20 08:22:28.195285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e100, cid 0, qid 0 00:25:14.228 [2024-11-20 08:22:28.195449] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.228 [2024-11-20 08:22:28.195455] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.228 [2024-11-20 08:22:28.195458] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e100) on tqpair=0x1b0c690 00:25:14.228 [2024-11-20 08:22:28.195467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:25:14.228 [2024-11-20 08:22:28.195473] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:25:14.228 [2024-11-20 08:22:28.195479] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195483] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195486] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c690) 00:25:14.228 [2024-11-20 08:22:28.195492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.228 [2024-11-20 08:22:28.195502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e100, cid 0, qid 0 00:25:14.228 [2024-11-20 08:22:28.195566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.228 [2024-11-20 08:22:28.195571] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.228 [2024-11-20 08:22:28.195574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195578] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e100) on tqpair=0x1b0c690 00:25:14.228 [2024-11-20 08:22:28.195582] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:25:14.228 [2024-11-20 08:22:28.195592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:14.228 [2024-11-20 08:22:28.195598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c690) 00:25:14.228 [2024-11-20 08:22:28.195610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.228 [2024-11-20 08:22:28.195619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e100, cid 0, qid 0 00:25:14.228 [2024-11-20 08:22:28.195685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.228 [2024-11-20 08:22:28.195691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.228 [2024-11-20 08:22:28.195694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e100) on tqpair=0x1b0c690 00:25:14.228 [2024-11-20 08:22:28.195702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 
00:25:14.228 [2024-11-20 08:22:28.195710] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195713] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195716] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c690) 00:25:14.228 [2024-11-20 08:22:28.195722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.228 [2024-11-20 08:22:28.195731] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e100, cid 0, qid 0 00:25:14.228 [2024-11-20 08:22:28.195793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.228 [2024-11-20 08:22:28.195799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.228 [2024-11-20 08:22:28.195802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.228 [2024-11-20 08:22:28.195805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e100) on tqpair=0x1b0c690 00:25:14.228 [2024-11-20 08:22:28.195809] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:14.228 [2024-11-20 08:22:28.195813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:14.228 [2024-11-20 08:22:28.195821] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:14.228 [2024-11-20 08:22:28.195928] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:25:14.228 [2024-11-20 08:22:28.195932] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state 
to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:14.228 [2024-11-20 08:22:28.195939] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.195943] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.195946] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c690) 00:25:14.229 [2024-11-20 08:22:28.195952] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.229 [2024-11-20 08:22:28.195961] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e100, cid 0, qid 0 00:25:14.229 [2024-11-20 08:22:28.196025] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.229 [2024-11-20 08:22:28.196030] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.229 [2024-11-20 08:22:28.196035] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.196039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e100) on tqpair=0x1b0c690 00:25:14.229 [2024-11-20 08:22:28.196043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:14.229 [2024-11-20 08:22:28.196050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.196054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.196057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c690) 00:25:14.229 [2024-11-20 08:22:28.196062] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.229 [2024-11-20 08:22:28.196071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1b6e100, cid 0, qid 0 00:25:14.229 [2024-11-20 08:22:28.196143] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.229 [2024-11-20 08:22:28.196149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.229 [2024-11-20 08:22:28.196152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.196155] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e100) on tqpair=0x1b0c690 00:25:14.229 [2024-11-20 08:22:28.196158] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:14.229 [2024-11-20 08:22:28.196163] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:14.229 [2024-11-20 08:22:28.196170] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:25:14.229 [2024-11-20 08:22:28.196179] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:14.229 [2024-11-20 08:22:28.196187] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.196190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c690) 00:25:14.229 [2024-11-20 08:22:28.196196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.229 [2024-11-20 08:22:28.196210] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e100, cid 0, qid 0 00:25:14.229 [2024-11-20 08:22:28.196309] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.229 [2024-11-20 08:22:28.196315] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.229 [2024-11-20 08:22:28.196318] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.196321] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c690): datao=0, datal=4096, cccid=0 00:25:14.229 [2024-11-20 08:22:28.196325] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b6e100) on tqpair(0x1b0c690): expected_datao=0, payload_size=4096 00:25:14.229 [2024-11-20 08:22:28.196329] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.196343] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.196348] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.229 [2024-11-20 08:22:28.237290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.229 [2024-11-20 08:22:28.237294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237298] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e100) on tqpair=0x1b0c690 00:25:14.229 [2024-11-20 08:22:28.237305] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:25:14.229 [2024-11-20 08:22:28.237313] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:25:14.229 [2024-11-20 08:22:28.237318] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:25:14.229 [2024-11-20 08:22:28.237326] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:25:14.229 [2024-11-20 08:22:28.237330] 
nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:25:14.229 [2024-11-20 08:22:28.237335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:25:14.229 [2024-11-20 08:22:28.237345] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:14.229 [2024-11-20 08:22:28.237352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237356] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237360] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c690) 00:25:14.229 [2024-11-20 08:22:28.237366] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:14.229 [2024-11-20 08:22:28.237379] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e100, cid 0, qid 0 00:25:14.229 [2024-11-20 08:22:28.237460] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.229 [2024-11-20 08:22:28.237466] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.229 [2024-11-20 08:22:28.237469] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237472] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e100) on tqpair=0x1b0c690 00:25:14.229 [2024-11-20 08:22:28.237480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1b0c690) 00:25:14.229 
[2024-11-20 08:22:28.237492] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.229 [2024-11-20 08:22:28.237498] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237502] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237505] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1b0c690) 00:25:14.229 [2024-11-20 08:22:28.237510] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.229 [2024-11-20 08:22:28.237515] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237518] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237521] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1b0c690) 00:25:14.229 [2024-11-20 08:22:28.237526] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.229 [2024-11-20 08:22:28.237532] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237535] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237538] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c690) 00:25:14.229 [2024-11-20 08:22:28.237543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.229 [2024-11-20 08:22:28.237548] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:14.229 [2024-11-20 08:22:28.237556] 
nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:14.229 [2024-11-20 08:22:28.237564] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.229 [2024-11-20 08:22:28.237568] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b0c690) 00:25:14.229 [2024-11-20 08:22:28.237574] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.229 [2024-11-20 08:22:28.237585] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e100, cid 0, qid 0 00:25:14.229 [2024-11-20 08:22:28.237590] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e280, cid 1, qid 0 00:25:14.230 [2024-11-20 08:22:28.237594] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e400, cid 2, qid 0 00:25:14.230 [2024-11-20 08:22:28.237598] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e580, cid 3, qid 0 00:25:14.230 [2024-11-20 08:22:28.237602] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e700, cid 4, qid 0 00:25:14.230 [2024-11-20 08:22:28.237699] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.230 [2024-11-20 08:22:28.237705] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.230 [2024-11-20 08:22:28.237709] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.237712] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e700) on tqpair=0x1b0c690 00:25:14.230 [2024-11-20 08:22:28.237719] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:25:14.230 [2024-11-20 08:22:28.237724] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: 
*DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to ready (no timeout) 00:25:14.230 [2024-11-20 08:22:28.237734] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.237738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b0c690) 00:25:14.230 [2024-11-20 08:22:28.237743] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.230 [2024-11-20 08:22:28.237753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e700, cid 4, qid 0 00:25:14.230 [2024-11-20 08:22:28.237822] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.230 [2024-11-20 08:22:28.237827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.230 [2024-11-20 08:22:28.237831] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.237834] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c690): datao=0, datal=4096, cccid=4 00:25:14.230 [2024-11-20 08:22:28.237838] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b6e700) on tqpair(0x1b0c690): expected_datao=0, payload_size=4096 00:25:14.230 [2024-11-20 08:22:28.237842] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.237857] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.237861] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.237897] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.230 [2024-11-20 08:22:28.237904] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.230 [2024-11-20 08:22:28.237907] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.237910] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e700) on tqpair=0x1b0c690 00:25:14.230 [2024-11-20 08:22:28.237921] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:25:14.230 [2024-11-20 08:22:28.237940] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.237944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b0c690) 00:25:14.230 [2024-11-20 08:22:28.237950] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.230 [2024-11-20 08:22:28.237958] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.237961] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.237964] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1b0c690) 00:25:14.230 [2024-11-20 08:22:28.237970] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.230 [2024-11-20 08:22:28.237983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e700, cid 4, qid 0 00:25:14.230 [2024-11-20 08:22:28.237989] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e880, cid 5, qid 0 00:25:14.230 [2024-11-20 08:22:28.238085] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.230 [2024-11-20 08:22:28.238091] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.230 [2024-11-20 08:22:28.238095] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.238098] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c690): datao=0, datal=1024, cccid=4 00:25:14.230 
[2024-11-20 08:22:28.238102] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b6e700) on tqpair(0x1b0c690): expected_datao=0, payload_size=1024 00:25:14.230 [2024-11-20 08:22:28.238106] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.238111] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.238115] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.238120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.230 [2024-11-20 08:22:28.238125] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.230 [2024-11-20 08:22:28.238128] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.230 [2024-11-20 08:22:28.238132] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e880) on tqpair=0x1b0c690 00:25:14.497 [2024-11-20 08:22:28.281212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.497 [2024-11-20 08:22:28.281224] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.497 [2024-11-20 08:22:28.281227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.497 [2024-11-20 08:22:28.281231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e700) on tqpair=0x1b0c690 00:25:14.497 [2024-11-20 08:22:28.281241] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.497 [2024-11-20 08:22:28.281244] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b0c690) 00:25:14.497 [2024-11-20 08:22:28.281251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.497 [2024-11-20 08:22:28.281268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e700, cid 4, qid 0 00:25:14.498 
[2024-11-20 08:22:28.281421] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.498 [2024-11-20 08:22:28.281427] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.498 [2024-11-20 08:22:28.281430] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.498 [2024-11-20 08:22:28.281434] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c690): datao=0, datal=3072, cccid=4 00:25:14.498 [2024-11-20 08:22:28.281438] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b6e700) on tqpair(0x1b0c690): expected_datao=0, payload_size=3072 00:25:14.498 [2024-11-20 08:22:28.281442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.498 [2024-11-20 08:22:28.281454] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.498 [2024-11-20 08:22:28.281458] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.498 [2024-11-20 08:22:28.324212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.498 [2024-11-20 08:22:28.324221] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.498 [2024-11-20 08:22:28.324227] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.498 [2024-11-20 08:22:28.324231] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e700) on tqpair=0x1b0c690 00:25:14.498 [2024-11-20 08:22:28.324240] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.498 [2024-11-20 08:22:28.324243] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1b0c690) 00:25:14.498 [2024-11-20 08:22:28.324250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.498 [2024-11-20 08:22:28.324265] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e700, cid 4, qid 0 
00:25:14.498 [2024-11-20 08:22:28.324395] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.498 [2024-11-20 08:22:28.324400] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.498 [2024-11-20 08:22:28.324403] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.498 [2024-11-20 08:22:28.324407] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1b0c690): datao=0, datal=8, cccid=4 00:25:14.498 [2024-11-20 08:22:28.324411] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1b6e700) on tqpair(0x1b0c690): expected_datao=0, payload_size=8 00:25:14.498 [2024-11-20 08:22:28.324415] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.498 [2024-11-20 08:22:28.324420] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.498 [2024-11-20 08:22:28.324424] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.498 [2024-11-20 08:22:28.366360] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.498 [2024-11-20 08:22:28.366371] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.498 [2024-11-20 08:22:28.366376] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.498 [2024-11-20 08:22:28.366381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e700) on tqpair=0x1b0c690 00:25:14.498 ===================================================== 00:25:14.498 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:14.498 ===================================================== 00:25:14.498 Controller Capabilities/Features 00:25:14.498 ================================ 00:25:14.498 Vendor ID: 0000 00:25:14.498 Subsystem Vendor ID: 0000 00:25:14.498 Serial Number: .................... 00:25:14.498 Model Number: ........................................ 
00:25:14.498 Firmware Version: 25.01 00:25:14.498 Recommended Arb Burst: 0 00:25:14.498 IEEE OUI Identifier: 00 00 00 00:25:14.498 Multi-path I/O 00:25:14.498 May have multiple subsystem ports: No 00:25:14.498 May have multiple controllers: No 00:25:14.498 Associated with SR-IOV VF: No 00:25:14.498 Max Data Transfer Size: 131072 00:25:14.498 Max Number of Namespaces: 0 00:25:14.498 Max Number of I/O Queues: 1024 00:25:14.498 NVMe Specification Version (VS): 1.3 00:25:14.498 NVMe Specification Version (Identify): 1.3 00:25:14.498 Maximum Queue Entries: 128 00:25:14.498 Contiguous Queues Required: Yes 00:25:14.498 Arbitration Mechanisms Supported 00:25:14.498 Weighted Round Robin: Not Supported 00:25:14.498 Vendor Specific: Not Supported 00:25:14.498 Reset Timeout: 15000 ms 00:25:14.498 Doorbell Stride: 4 bytes 00:25:14.498 NVM Subsystem Reset: Not Supported 00:25:14.498 Command Sets Supported 00:25:14.498 NVM Command Set: Supported 00:25:14.498 Boot Partition: Not Supported 00:25:14.498 Memory Page Size Minimum: 4096 bytes 00:25:14.498 Memory Page Size Maximum: 4096 bytes 00:25:14.498 Persistent Memory Region: Not Supported 00:25:14.498 Optional Asynchronous Events Supported 00:25:14.498 Namespace Attribute Notices: Not Supported 00:25:14.498 Firmware Activation Notices: Not Supported 00:25:14.498 ANA Change Notices: Not Supported 00:25:14.498 PLE Aggregate Log Change Notices: Not Supported 00:25:14.498 LBA Status Info Alert Notices: Not Supported 00:25:14.498 EGE Aggregate Log Change Notices: Not Supported 00:25:14.498 Normal NVM Subsystem Shutdown event: Not Supported 00:25:14.498 Zone Descriptor Change Notices: Not Supported 00:25:14.498 Discovery Log Change Notices: Supported 00:25:14.498 Controller Attributes 00:25:14.498 128-bit Host Identifier: Not Supported 00:25:14.498 Non-Operational Permissive Mode: Not Supported 00:25:14.498 NVM Sets: Not Supported 00:25:14.498 Read Recovery Levels: Not Supported 00:25:14.498 Endurance Groups: Not Supported 00:25:14.498 
Predictable Latency Mode: Not Supported 00:25:14.498 Traffic Based Keep ALive: Not Supported 00:25:14.498 Namespace Granularity: Not Supported 00:25:14.498 SQ Associations: Not Supported 00:25:14.498 UUID List: Not Supported 00:25:14.498 Multi-Domain Subsystem: Not Supported 00:25:14.498 Fixed Capacity Management: Not Supported 00:25:14.498 Variable Capacity Management: Not Supported 00:25:14.498 Delete Endurance Group: Not Supported 00:25:14.498 Delete NVM Set: Not Supported 00:25:14.498 Extended LBA Formats Supported: Not Supported 00:25:14.498 Flexible Data Placement Supported: Not Supported 00:25:14.498 00:25:14.498 Controller Memory Buffer Support 00:25:14.498 ================================ 00:25:14.498 Supported: No 00:25:14.498 00:25:14.498 Persistent Memory Region Support 00:25:14.498 ================================ 00:25:14.498 Supported: No 00:25:14.498 00:25:14.498 Admin Command Set Attributes 00:25:14.498 ============================ 00:25:14.498 Security Send/Receive: Not Supported 00:25:14.499 Format NVM: Not Supported 00:25:14.499 Firmware Activate/Download: Not Supported 00:25:14.499 Namespace Management: Not Supported 00:25:14.499 Device Self-Test: Not Supported 00:25:14.499 Directives: Not Supported 00:25:14.499 NVMe-MI: Not Supported 00:25:14.499 Virtualization Management: Not Supported 00:25:14.499 Doorbell Buffer Config: Not Supported 00:25:14.499 Get LBA Status Capability: Not Supported 00:25:14.499 Command & Feature Lockdown Capability: Not Supported 00:25:14.499 Abort Command Limit: 1 00:25:14.499 Async Event Request Limit: 4 00:25:14.499 Number of Firmware Slots: N/A 00:25:14.499 Firmware Slot 1 Read-Only: N/A 00:25:14.499 Firmware Activation Without Reset: N/A 00:25:14.499 Multiple Update Detection Support: N/A 00:25:14.499 Firmware Update Granularity: No Information Provided 00:25:14.499 Per-Namespace SMART Log: No 00:25:14.499 Asymmetric Namespace Access Log Page: Not Supported 00:25:14.499 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:25:14.499 Command Effects Log Page: Not Supported 00:25:14.499 Get Log Page Extended Data: Supported 00:25:14.499 Telemetry Log Pages: Not Supported 00:25:14.499 Persistent Event Log Pages: Not Supported 00:25:14.499 Supported Log Pages Log Page: May Support 00:25:14.499 Commands Supported & Effects Log Page: Not Supported 00:25:14.499 Feature Identifiers & Effects Log Page:May Support 00:25:14.499 NVMe-MI Commands & Effects Log Page: May Support 00:25:14.499 Data Area 4 for Telemetry Log: Not Supported 00:25:14.499 Error Log Page Entries Supported: 128 00:25:14.499 Keep Alive: Not Supported 00:25:14.499 00:25:14.499 NVM Command Set Attributes 00:25:14.499 ========================== 00:25:14.499 Submission Queue Entry Size 00:25:14.499 Max: 1 00:25:14.499 Min: 1 00:25:14.499 Completion Queue Entry Size 00:25:14.499 Max: 1 00:25:14.499 Min: 1 00:25:14.499 Number of Namespaces: 0 00:25:14.499 Compare Command: Not Supported 00:25:14.499 Write Uncorrectable Command: Not Supported 00:25:14.499 Dataset Management Command: Not Supported 00:25:14.499 Write Zeroes Command: Not Supported 00:25:14.499 Set Features Save Field: Not Supported 00:25:14.499 Reservations: Not Supported 00:25:14.499 Timestamp: Not Supported 00:25:14.499 Copy: Not Supported 00:25:14.499 Volatile Write Cache: Not Present 00:25:14.499 Atomic Write Unit (Normal): 1 00:25:14.499 Atomic Write Unit (PFail): 1 00:25:14.499 Atomic Compare & Write Unit: 1 00:25:14.499 Fused Compare & Write: Supported 00:25:14.499 Scatter-Gather List 00:25:14.499 SGL Command Set: Supported 00:25:14.499 SGL Keyed: Supported 00:25:14.499 SGL Bit Bucket Descriptor: Not Supported 00:25:14.499 SGL Metadata Pointer: Not Supported 00:25:14.499 Oversized SGL: Not Supported 00:25:14.499 SGL Metadata Address: Not Supported 00:25:14.499 SGL Offset: Supported 00:25:14.499 Transport SGL Data Block: Not Supported 00:25:14.499 Replay Protected Memory Block: Not Supported 00:25:14.499 00:25:14.499 
Firmware Slot Information 00:25:14.499 ========================= 00:25:14.499 Active slot: 0 00:25:14.499 00:25:14.499 00:25:14.499 Error Log 00:25:14.499 ========= 00:25:14.499 00:25:14.499 Active Namespaces 00:25:14.499 ================= 00:25:14.499 Discovery Log Page 00:25:14.499 ================== 00:25:14.499 Generation Counter: 2 00:25:14.499 Number of Records: 2 00:25:14.499 Record Format: 0 00:25:14.499 00:25:14.499 Discovery Log Entry 0 00:25:14.499 ---------------------- 00:25:14.499 Transport Type: 3 (TCP) 00:25:14.499 Address Family: 1 (IPv4) 00:25:14.499 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:14.499 Entry Flags: 00:25:14.499 Duplicate Returned Information: 1 00:25:14.499 Explicit Persistent Connection Support for Discovery: 1 00:25:14.499 Transport Requirements: 00:25:14.499 Secure Channel: Not Required 00:25:14.499 Port ID: 0 (0x0000) 00:25:14.499 Controller ID: 65535 (0xffff) 00:25:14.499 Admin Max SQ Size: 128 00:25:14.499 Transport Service Identifier: 4420 00:25:14.499 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:14.499 Transport Address: 10.0.0.2 00:25:14.499 Discovery Log Entry 1 00:25:14.499 ---------------------- 00:25:14.499 Transport Type: 3 (TCP) 00:25:14.499 Address Family: 1 (IPv4) 00:25:14.499 Subsystem Type: 2 (NVM Subsystem) 00:25:14.499 Entry Flags: 00:25:14.499 Duplicate Returned Information: 0 00:25:14.499 Explicit Persistent Connection Support for Discovery: 0 00:25:14.499 Transport Requirements: 00:25:14.499 Secure Channel: Not Required 00:25:14.499 Port ID: 0 (0x0000) 00:25:14.499 Controller ID: 65535 (0xffff) 00:25:14.499 Admin Max SQ Size: 128 00:25:14.499 Transport Service Identifier: 4420 00:25:14.499 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:25:14.499 Transport Address: 10.0.0.2 [2024-11-20 08:22:28.366469] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:25:14.499 [2024-11-20 
08:22:28.366481] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e100) on tqpair=0x1b0c690 00:25:14.499 [2024-11-20 08:22:28.366487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.499 [2024-11-20 08:22:28.366492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e280) on tqpair=0x1b0c690 00:25:14.499 [2024-11-20 08:22:28.366496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.499 [2024-11-20 08:22:28.366500] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e400) on tqpair=0x1b0c690 00:25:14.499 [2024-11-20 08:22:28.366504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.499 [2024-11-20 08:22:28.366508] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e580) on tqpair=0x1b0c690 00:25:14.499 [2024-11-20 08:22:28.366512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.499 [2024-11-20 08:22:28.366522] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.499 [2024-11-20 08:22:28.366525] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.499 [2024-11-20 08:22:28.366529] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c690) 00:25:14.499 [2024-11-20 08:22:28.366535] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.499 [2024-11-20 08:22:28.366550] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e580, cid 3, qid 0 00:25:14.499 [2024-11-20 08:22:28.366614] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.499 [2024-11-20 
08:22:28.366620] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.499 [2024-11-20 08:22:28.366626] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.499 [2024-11-20 08:22:28.366629] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e580) on tqpair=0x1b0c690 00:25:14.499 [2024-11-20 08:22:28.366635] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.499 [2024-11-20 08:22:28.366639] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.499 [2024-11-20 08:22:28.366642] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c690) 00:25:14.499 [2024-11-20 08:22:28.366648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.499 [2024-11-20 08:22:28.366661] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e580, cid 3, qid 0 00:25:14.499 [2024-11-20 08:22:28.366732] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.500 [2024-11-20 08:22:28.366738] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.500 [2024-11-20 08:22:28.366741] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.366744] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e580) on tqpair=0x1b0c690 00:25:14.500 [2024-11-20 08:22:28.366748] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:25:14.500 [2024-11-20 08:22:28.366752] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:25:14.500 [2024-11-20 08:22:28.366760] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.366763] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.500 
[2024-11-20 08:22:28.366766] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c690) 00:25:14.500 [2024-11-20 08:22:28.366772] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.500 [2024-11-20 08:22:28.366781] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e580, cid 3, qid 0 00:25:14.500 [2024-11-20 08:22:28.366844] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.500 [2024-11-20 08:22:28.366850] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.500 [2024-11-20 08:22:28.366853] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.366856] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e580) on tqpair=0x1b0c690 00:25:14.500 [2024-11-20 08:22:28.366864] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.366868] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.366871] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c690) 00:25:14.500 [2024-11-20 08:22:28.366876] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.500 [2024-11-20 08:22:28.366885] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e580, cid 3, qid 0 00:25:14.500 [2024-11-20 08:22:28.366952] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.500 [2024-11-20 08:22:28.366958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.500 [2024-11-20 08:22:28.366960] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.366964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e580) on 
tqpair=0x1b0c690 00:25:14.500 [2024-11-20 08:22:28.366972] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.366975] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.366978] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c690) 00:25:14.500 [2024-11-20 08:22:28.366983] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.500 [2024-11-20 08:22:28.366993] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e580, cid 3, qid 0 00:25:14.500 [2024-11-20 08:22:28.367063] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.500 [2024-11-20 08:22:28.367068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.500 [2024-11-20 08:22:28.367071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.367075] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e580) on tqpair=0x1b0c690 00:25:14.500 [2024-11-20 08:22:28.367083] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.367087] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.367090] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c690) 00:25:14.500 [2024-11-20 08:22:28.367095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.500 [2024-11-20 08:22:28.367105] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e580, cid 3, qid 0 00:25:14.500 [2024-11-20 08:22:28.367163] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.500 [2024-11-20 08:22:28.367168] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:25:14.500 [2024-11-20 08:22:28.367171] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.367175] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e580) on tqpair=0x1b0c690 00:25:14.500 [2024-11-20 08:22:28.367182] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.367186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.367189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c690) 00:25:14.500 [2024-11-20 08:22:28.367195] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.500 [2024-11-20 08:22:28.367212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e580, cid 3, qid 0 00:25:14.500 [2024-11-20 08:22:28.367278] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.500 [2024-11-20 08:22:28.367284] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.500 [2024-11-20 08:22:28.367287] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.367290] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e580) on tqpair=0x1b0c690 00:25:14.500 [2024-11-20 08:22:28.367298] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.367302] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.367305] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c690) 00:25:14.500 [2024-11-20 08:22:28.367310] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.500 [2024-11-20 08:22:28.367320] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1b6e580, cid 3, qid 0 00:25:14.500 [2024-11-20 08:22:28.367384] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.500 [2024-11-20 08:22:28.367389] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.500 [2024-11-20 08:22:28.367392] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.367395] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e580) on tqpair=0x1b0c690 00:25:14.500 [2024-11-20 08:22:28.367403] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.367406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.500 [2024-11-20 08:22:28.367410] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c690) 00:25:14.500 [2024-11-20 08:22:28.367415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.500 [2024-11-20 08:22:28.367424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e580, cid 3, qid 0 00:25:14.501 [2024-11-20 08:22:28.372210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.501 [2024-11-20 08:22:28.372217] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.501 [2024-11-20 08:22:28.372220] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.372224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e580) on tqpair=0x1b0c690 00:25:14.501 [2024-11-20 08:22:28.372233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.372237] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.372239] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1b0c690) 00:25:14.501 [2024-11-20 08:22:28.372245] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-11-20 
08:22:28.372256] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1b6e580, cid 3, qid 0 00:25:14.501 [2024-11-20 08:22:28.372404] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.501 [2024-11-20 08:22:28.372409] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.501 [2024-11-20 08:22:28.372412] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.372418] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1b6e580) on tqpair=0x1b0c690 00:25:14.501 [2024-11-20 08:22:28.372424] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:25:14.501 00:25:14.501 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:25:14.501 [2024-11-20 08:22:28.408849] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:25:14.501 [2024-11-20 08:22:28.408883] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1781285 ] 00:25:14.501 [2024-11-20 08:22:28.445690] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:25:14.501 [2024-11-20 08:22:28.445730] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:25:14.501 [2024-11-20 08:22:28.445736] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:25:14.501 [2024-11-20 08:22:28.445745] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:25:14.501 [2024-11-20 08:22:28.445755] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:25:14.501 [2024-11-20 08:22:28.453364] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:25:14.501 [2024-11-20 08:22:28.453394] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x16b6690 0 00:25:14.501 [2024-11-20 08:22:28.453565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:25:14.501 [2024-11-20 08:22:28.453572] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:25:14.501 [2024-11-20 08:22:28.453576] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:25:14.501 [2024-11-20 08:22:28.453578] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:25:14.501 [2024-11-20 08:22:28.453602] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.453607] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.453610] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b6690) 00:25:14.501 [2024-11-20 08:22:28.453620] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:25:14.501 [2024-11-20 08:22:28.453632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718100, cid 0, qid 0 00:25:14.501 [2024-11-20 08:22:28.461209] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.501 [2024-11-20 08:22:28.461218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.501 [2024-11-20 08:22:28.461221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.461224] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718100) on tqpair=0x16b6690 00:25:14.501 [2024-11-20 08:22:28.461236] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:25:14.501 [2024-11-20 08:22:28.461242] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:25:14.501 [2024-11-20 08:22:28.461247] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:25:14.501 [2024-11-20 08:22:28.461258] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.461261] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.461265] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b6690) 00:25:14.501 [2024-11-20 08:22:28.461273] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-11-20 08:22:28.461285] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718100, cid 0, qid 0 00:25:14.501 [2024-11-20 08:22:28.461442] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.501 [2024-11-20 08:22:28.461447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.501 [2024-11-20 08:22:28.461450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.461454] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718100) on tqpair=0x16b6690 00:25:14.501 [2024-11-20 08:22:28.461458] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:25:14.501 [2024-11-20 08:22:28.461465] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:25:14.501 [2024-11-20 08:22:28.461471] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.461474] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.461477] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b6690) 00:25:14.501 [2024-11-20 08:22:28.461483] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-11-20 08:22:28.461493] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718100, cid 0, qid 0 00:25:14.501 [2024-11-20 08:22:28.461593] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.501 [2024-11-20 08:22:28.461599] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.501 [2024-11-20 08:22:28.461602] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.461606] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718100) on tqpair=0x16b6690 00:25:14.501 [2024-11-20 08:22:28.461610] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:25:14.501 [2024-11-20 08:22:28.461616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:25:14.501 [2024-11-20 08:22:28.461622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.461625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.461628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b6690) 00:25:14.501 [2024-11-20 08:22:28.461634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.501 [2024-11-20 08:22:28.461644] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718100, cid 0, qid 0 00:25:14.501 [2024-11-20 08:22:28.461744] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.501 [2024-11-20 08:22:28.461750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.501 [2024-11-20 08:22:28.461753] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.461756] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718100) on tqpair=0x16b6690 00:25:14.501 [2024-11-20 08:22:28.461760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:25:14.501 [2024-11-20 08:22:28.461768] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.461772] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.501 [2024-11-20 08:22:28.461775] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b6690) 00:25:14.501 [2024-11-20 08:22:28.461781] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.502 [2024-11-20 08:22:28.461790] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718100, cid 0, qid 0 00:25:14.502 [2024-11-20 08:22:28.461895] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.502 [2024-11-20 08:22:28.461901] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.502 [2024-11-20 08:22:28.461904] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.461907] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718100) on tqpair=0x16b6690 00:25:14.502 [2024-11-20 08:22:28.461911] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:25:14.502 [2024-11-20 08:22:28.461915] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:25:14.502 [2024-11-20 08:22:28.461922] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:25:14.502 [2024-11-20 08:22:28.462029] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:25:14.502 [2024-11-20 08:22:28.462033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:25:14.502 [2024-11-20 08:22:28.462040] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462043] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462046] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b6690) 00:25:14.502 [2024-11-20 08:22:28.462052] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.502 [2024-11-20 08:22:28.462061] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718100, cid 0, qid 0 00:25:14.502 [2024-11-20 08:22:28.462122] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.502 [2024-11-20 08:22:28.462127] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.502 [2024-11-20 08:22:28.462130] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718100) on tqpair=0x16b6690 00:25:14.502 [2024-11-20 08:22:28.462137] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:25:14.502 [2024-11-20 08:22:28.462146] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b6690) 00:25:14.502 [2024-11-20 08:22:28.462158] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.502 [2024-11-20 08:22:28.462167] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718100, cid 0, qid 0 00:25:14.502 [2024-11-20 08:22:28.462277] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.502 [2024-11-20 08:22:28.462283] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.502 [2024-11-20 08:22:28.462286] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462289] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718100) on tqpair=0x16b6690 00:25:14.502 [2024-11-20 08:22:28.462293] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:25:14.502 [2024-11-20 08:22:28.462297] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:25:14.502 [2024-11-20 08:22:28.462304] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:25:14.502 [2024-11-20 08:22:28.462311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:25:14.502 [2024-11-20 08:22:28.462322] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462325] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b6690) 00:25:14.502 [2024-11-20 08:22:28.462331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.502 [2024-11-20 08:22:28.462341] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718100, cid 0, qid 0 00:25:14.502 [2024-11-20 08:22:28.462438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.502 [2024-11-20 08:22:28.462444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.502 [2024-11-20 08:22:28.462447] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462450] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b6690): datao=0, datal=4096, cccid=0 00:25:14.502 [2024-11-20 08:22:28.462454] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1718100) on tqpair(0x16b6690): expected_datao=0, payload_size=4096 00:25:14.502 [2024-11-20 08:22:28.462458] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462464] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462467] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462528] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.502 [2024-11-20 08:22:28.462534] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.502 [2024-11-20 08:22:28.462537] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462540] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718100) on tqpair=0x16b6690 00:25:14.502 [2024-11-20 08:22:28.462546] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:25:14.502 [2024-11-20 08:22:28.462551] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:25:14.502 [2024-11-20 08:22:28.462555] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:25:14.502 [2024-11-20 08:22:28.462561] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:25:14.502 [2024-11-20 08:22:28.462565] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:25:14.502 [2024-11-20 08:22:28.462569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:25:14.502 [2024-11-20 08:22:28.462580] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:25:14.502 [2024-11-20 08:22:28.462586] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462589] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462592] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b6690) 00:25:14.502 [2024-11-20 08:22:28.462598] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:14.502 [2024-11-20 08:22:28.462609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718100, cid 0, qid 0 00:25:14.502 [2024-11-20 08:22:28.462681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.502 [2024-11-20 08:22:28.462686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.502 [2024-11-20 08:22:28.462689] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718100) on tqpair=0x16b6690 00:25:14.502 [2024-11-20 08:22:28.462698] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x16b6690) 00:25:14.502 [2024-11-20 08:22:28.462711] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.502 [2024-11-20 08:22:28.462717] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462720] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462723] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x16b6690) 00:25:14.502 [2024-11-20 08:22:28.462728] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:25:14.502 [2024-11-20 08:22:28.462733] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462736] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462739] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x16b6690) 00:25:14.502 [2024-11-20 08:22:28.462744] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.502 [2024-11-20 08:22:28.462749] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462752] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.502 [2024-11-20 08:22:28.462760] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.502 [2024-11-20 08:22:28.462764] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:25:14.502 [2024-11-20 08:22:28.462772] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:25:14.502 [2024-11-20 08:22:28.462777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b6690) 00:25:14.502 [2024-11-20 08:22:28.462786] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.502 [2024-11-20 08:22:28.462796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1718100, cid 0, qid 0 00:25:14.502 [2024-11-20 08:22:28.462801] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718280, cid 1, qid 0 00:25:14.502 [2024-11-20 08:22:28.462805] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718400, cid 2, qid 0 00:25:14.502 [2024-11-20 08:22:28.462809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.502 [2024-11-20 08:22:28.462812] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718700, cid 4, qid 0 00:25:14.502 [2024-11-20 08:22:28.462932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.502 [2024-11-20 08:22:28.462938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.502 [2024-11-20 08:22:28.462941] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.502 [2024-11-20 08:22:28.462944] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718700) on tqpair=0x16b6690 00:25:14.502 [2024-11-20 08:22:28.462951] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:25:14.502 [2024-11-20 08:22:28.462955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.462963] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.462968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.462975] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.462978] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.503 [2024-11-20 
08:22:28.462981] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b6690) 00:25:14.503 [2024-11-20 08:22:28.462987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:25:14.503 [2024-11-20 08:22:28.462997] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718700, cid 4, qid 0 00:25:14.503 [2024-11-20 08:22:28.463058] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.503 [2024-11-20 08:22:28.463064] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.503 [2024-11-20 08:22:28.463067] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718700) on tqpair=0x16b6690 00:25:14.503 [2024-11-20 08:22:28.463121] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.463131] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.463137] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463140] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b6690) 00:25:14.503 [2024-11-20 08:22:28.463146] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.503 [2024-11-20 08:22:28.463155] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718700, cid 4, qid 0 00:25:14.503 [2024-11-20 08:22:28.463242] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.503 [2024-11-20 08:22:28.463249] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.503 [2024-11-20 08:22:28.463252] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463255] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b6690): datao=0, datal=4096, cccid=4 00:25:14.503 [2024-11-20 08:22:28.463259] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1718700) on tqpair(0x16b6690): expected_datao=0, payload_size=4096 00:25:14.503 [2024-11-20 08:22:28.463263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463268] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463272] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463281] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.503 [2024-11-20 08:22:28.463286] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.503 [2024-11-20 08:22:28.463289] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463292] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718700) on tqpair=0x16b6690 00:25:14.503 [2024-11-20 08:22:28.463301] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:25:14.503 [2024-11-20 08:22:28.463313] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.463323] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.463329] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463332] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x16b6690) 00:25:14.503 [2024-11-20 08:22:28.463338] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.503 [2024-11-20 08:22:28.463350] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718700, cid 4, qid 0 00:25:14.503 [2024-11-20 08:22:28.463442] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.503 [2024-11-20 08:22:28.463448] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.503 [2024-11-20 08:22:28.463451] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463454] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b6690): datao=0, datal=4096, cccid=4 00:25:14.503 [2024-11-20 08:22:28.463457] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1718700) on tqpair(0x16b6690): expected_datao=0, payload_size=4096 00:25:14.503 [2024-11-20 08:22:28.463461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463466] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463469] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463481] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.503 [2024-11-20 08:22:28.463486] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.503 [2024-11-20 08:22:28.463489] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463492] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718700) on tqpair=0x16b6690 00:25:14.503 [2024-11-20 08:22:28.463505] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:25:14.503 
[2024-11-20 08:22:28.463515] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.463521] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463524] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b6690) 00:25:14.503 [2024-11-20 08:22:28.463530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.503 [2024-11-20 08:22:28.463540] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718700, cid 4, qid 0 00:25:14.503 [2024-11-20 08:22:28.463645] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.503 [2024-11-20 08:22:28.463651] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.503 [2024-11-20 08:22:28.463654] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463656] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b6690): datao=0, datal=4096, cccid=4 00:25:14.503 [2024-11-20 08:22:28.463660] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1718700) on tqpair(0x16b6690): expected_datao=0, payload_size=4096 00:25:14.503 [2024-11-20 08:22:28.463664] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463669] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463673] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.503 [2024-11-20 08:22:28.463686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.503 [2024-11-20 08:22:28.463689] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463693] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718700) on tqpair=0x16b6690 00:25:14.503 [2024-11-20 08:22:28.463699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.463707] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.463716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.463723] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.463728] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.463732] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.463737] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:25:14.503 [2024-11-20 08:22:28.463741] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:25:14.503 [2024-11-20 08:22:28.463746] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:25:14.503 [2024-11-20 08:22:28.463758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463762] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b6690) 00:25:14.503 [2024-11-20 08:22:28.463767] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.503 [2024-11-20 08:22:28.463773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463776] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.503 [2024-11-20 08:22:28.463779] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16b6690) 00:25:14.504 [2024-11-20 08:22:28.463784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:25:14.504 [2024-11-20 08:22:28.463797] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718700, cid 4, qid 0 00:25:14.504 [2024-11-20 08:22:28.463802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718880, cid 5, qid 0 00:25:14.504 [2024-11-20 08:22:28.463936] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.504 [2024-11-20 08:22:28.463942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.504 [2024-11-20 08:22:28.463945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.463948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718700) on tqpair=0x16b6690 00:25:14.504 [2024-11-20 08:22:28.463954] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.504 [2024-11-20 08:22:28.463958] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.504 [2024-11-20 08:22:28.463961] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.463964] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718880) on tqpair=0x16b6690 00:25:14.504 [2024-11-20 
08:22:28.463973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.463977] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16b6690) 00:25:14.504 [2024-11-20 08:22:28.463982] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.504 [2024-11-20 08:22:28.463992] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718880, cid 5, qid 0 00:25:14.504 [2024-11-20 08:22:28.464069] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.504 [2024-11-20 08:22:28.464075] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.504 [2024-11-20 08:22:28.464078] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464081] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718880) on tqpair=0x16b6690 00:25:14.504 [2024-11-20 08:22:28.464089] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464092] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16b6690) 00:25:14.504 [2024-11-20 08:22:28.464097] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.504 [2024-11-20 08:22:28.464108] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718880, cid 5, qid 0 00:25:14.504 [2024-11-20 08:22:28.464228] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.504 [2024-11-20 08:22:28.464234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.504 [2024-11-20 08:22:28.464237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1718880) on tqpair=0x16b6690 00:25:14.504 [2024-11-20 08:22:28.464248] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464251] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16b6690) 00:25:14.504 [2024-11-20 08:22:28.464257] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.504 [2024-11-20 08:22:28.464266] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718880, cid 5, qid 0 00:25:14.504 [2024-11-20 08:22:28.464329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.504 [2024-11-20 08:22:28.464335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.504 [2024-11-20 08:22:28.464338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718880) on tqpair=0x16b6690 00:25:14.504 [2024-11-20 08:22:28.464354] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464358] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x16b6690) 00:25:14.504 [2024-11-20 08:22:28.464364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.504 [2024-11-20 08:22:28.464369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464373] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x16b6690) 00:25:14.504 [2024-11-20 08:22:28.464378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:25:14.504 [2024-11-20 08:22:28.464384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x16b6690) 00:25:14.504 [2024-11-20 08:22:28.464392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.504 [2024-11-20 08:22:28.464399] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464402] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16b6690) 00:25:14.504 [2024-11-20 08:22:28.464407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.504 [2024-11-20 08:22:28.464417] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718880, cid 5, qid 0 00:25:14.504 [2024-11-20 08:22:28.464422] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718700, cid 4, qid 0 00:25:14.504 [2024-11-20 08:22:28.464426] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718a00, cid 6, qid 0 00:25:14.504 [2024-11-20 08:22:28.464430] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718b80, cid 7, qid 0 00:25:14.504 [2024-11-20 08:22:28.464565] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.504 [2024-11-20 08:22:28.464571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.504 [2024-11-20 08:22:28.464573] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464577] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b6690): datao=0, datal=8192, cccid=5 00:25:14.504 [2024-11-20 08:22:28.464584] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1718880) on tqpair(0x16b6690): expected_datao=0, payload_size=8192 00:25:14.504 [2024-11-20 08:22:28.464588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464621] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464625] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464630] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.504 [2024-11-20 08:22:28.464635] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.504 [2024-11-20 08:22:28.464638] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464641] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b6690): datao=0, datal=512, cccid=4 00:25:14.504 [2024-11-20 08:22:28.464644] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1718700) on tqpair(0x16b6690): expected_datao=0, payload_size=512 00:25:14.504 [2024-11-20 08:22:28.464648] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464653] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464656] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464661] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.504 [2024-11-20 08:22:28.464665] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.504 [2024-11-20 08:22:28.464668] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464671] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b6690): datao=0, datal=512, cccid=6 00:25:14.504 [2024-11-20 08:22:28.464675] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1718a00) on tqpair(0x16b6690): expected_datao=0, payload_size=512 00:25:14.504 [2024-11-20 08:22:28.464679] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464684] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464687] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464692] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:25:14.504 [2024-11-20 08:22:28.464696] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:25:14.504 [2024-11-20 08:22:28.464699] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464702] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x16b6690): datao=0, datal=4096, cccid=7 00:25:14.504 [2024-11-20 08:22:28.464706] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1718b80) on tqpair(0x16b6690): expected_datao=0, payload_size=4096 00:25:14.504 [2024-11-20 08:22:28.464709] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464715] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464718] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464725] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.504 [2024-11-20 08:22:28.464730] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.504 [2024-11-20 08:22:28.464733] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464736] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718880) on tqpair=0x16b6690 00:25:14.504 [2024-11-20 08:22:28.464747] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.504 [2024-11-20 08:22:28.464752] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.504 [2024-11-20 08:22:28.464755] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464758] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718700) on tqpair=0x16b6690 00:25:14.504 [2024-11-20 08:22:28.464767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.504 [2024-11-20 08:22:28.464772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.504 [2024-11-20 08:22:28.464776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464779] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718a00) on tqpair=0x16b6690 00:25:14.504 [2024-11-20 08:22:28.464785] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.504 [2024-11-20 08:22:28.464790] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.504 [2024-11-20 08:22:28.464793] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.504 [2024-11-20 08:22:28.464796] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718b80) on tqpair=0x16b6690 00:25:14.504 ===================================================== 00:25:14.504 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:14.504 ===================================================== 00:25:14.504 Controller Capabilities/Features 00:25:14.504 ================================ 00:25:14.504 Vendor ID: 8086 00:25:14.504 Subsystem Vendor ID: 8086 00:25:14.504 Serial Number: SPDK00000000000001 00:25:14.505 Model Number: SPDK bdev Controller 00:25:14.505 Firmware Version: 25.01 00:25:14.505 Recommended Arb Burst: 6 00:25:14.505 IEEE OUI Identifier: e4 d2 5c 00:25:14.505 Multi-path I/O 00:25:14.505 May have multiple subsystem ports: Yes 00:25:14.505 May have multiple controllers: Yes 00:25:14.505 Associated with SR-IOV VF: No 
00:25:14.505 Max Data Transfer Size: 131072 00:25:14.505 Max Number of Namespaces: 32 00:25:14.505 Max Number of I/O Queues: 127 00:25:14.505 NVMe Specification Version (VS): 1.3 00:25:14.505 NVMe Specification Version (Identify): 1.3 00:25:14.505 Maximum Queue Entries: 128 00:25:14.505 Contiguous Queues Required: Yes 00:25:14.505 Arbitration Mechanisms Supported 00:25:14.505 Weighted Round Robin: Not Supported 00:25:14.505 Vendor Specific: Not Supported 00:25:14.505 Reset Timeout: 15000 ms 00:25:14.505 Doorbell Stride: 4 bytes 00:25:14.505 NVM Subsystem Reset: Not Supported 00:25:14.505 Command Sets Supported 00:25:14.505 NVM Command Set: Supported 00:25:14.505 Boot Partition: Not Supported 00:25:14.505 Memory Page Size Minimum: 4096 bytes 00:25:14.505 Memory Page Size Maximum: 4096 bytes 00:25:14.505 Persistent Memory Region: Not Supported 00:25:14.505 Optional Asynchronous Events Supported 00:25:14.505 Namespace Attribute Notices: Supported 00:25:14.505 Firmware Activation Notices: Not Supported 00:25:14.505 ANA Change Notices: Not Supported 00:25:14.505 PLE Aggregate Log Change Notices: Not Supported 00:25:14.505 LBA Status Info Alert Notices: Not Supported 00:25:14.505 EGE Aggregate Log Change Notices: Not Supported 00:25:14.505 Normal NVM Subsystem Shutdown event: Not Supported 00:25:14.505 Zone Descriptor Change Notices: Not Supported 00:25:14.505 Discovery Log Change Notices: Not Supported 00:25:14.505 Controller Attributes 00:25:14.505 128-bit Host Identifier: Supported 00:25:14.505 Non-Operational Permissive Mode: Not Supported 00:25:14.505 NVM Sets: Not Supported 00:25:14.505 Read Recovery Levels: Not Supported 00:25:14.505 Endurance Groups: Not Supported 00:25:14.505 Predictable Latency Mode: Not Supported 00:25:14.505 Traffic Based Keep ALive: Not Supported 00:25:14.505 Namespace Granularity: Not Supported 00:25:14.505 SQ Associations: Not Supported 00:25:14.505 UUID List: Not Supported 00:25:14.505 Multi-Domain Subsystem: Not Supported 00:25:14.505 
Fixed Capacity Management: Not Supported 00:25:14.505 Variable Capacity Management: Not Supported 00:25:14.505 Delete Endurance Group: Not Supported 00:25:14.505 Delete NVM Set: Not Supported 00:25:14.505 Extended LBA Formats Supported: Not Supported 00:25:14.505 Flexible Data Placement Supported: Not Supported 00:25:14.505 00:25:14.505 Controller Memory Buffer Support 00:25:14.505 ================================ 00:25:14.505 Supported: No 00:25:14.505 00:25:14.505 Persistent Memory Region Support 00:25:14.505 ================================ 00:25:14.505 Supported: No 00:25:14.505 00:25:14.505 Admin Command Set Attributes 00:25:14.505 ============================ 00:25:14.505 Security Send/Receive: Not Supported 00:25:14.505 Format NVM: Not Supported 00:25:14.505 Firmware Activate/Download: Not Supported 00:25:14.505 Namespace Management: Not Supported 00:25:14.505 Device Self-Test: Not Supported 00:25:14.505 Directives: Not Supported 00:25:14.505 NVMe-MI: Not Supported 00:25:14.505 Virtualization Management: Not Supported 00:25:14.505 Doorbell Buffer Config: Not Supported 00:25:14.505 Get LBA Status Capability: Not Supported 00:25:14.505 Command & Feature Lockdown Capability: Not Supported 00:25:14.505 Abort Command Limit: 4 00:25:14.505 Async Event Request Limit: 4 00:25:14.505 Number of Firmware Slots: N/A 00:25:14.505 Firmware Slot 1 Read-Only: N/A 00:25:14.505 Firmware Activation Without Reset: N/A 00:25:14.505 Multiple Update Detection Support: N/A 00:25:14.505 Firmware Update Granularity: No Information Provided 00:25:14.505 Per-Namespace SMART Log: No 00:25:14.505 Asymmetric Namespace Access Log Page: Not Supported 00:25:14.505 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:25:14.505 Command Effects Log Page: Supported 00:25:14.505 Get Log Page Extended Data: Supported 00:25:14.505 Telemetry Log Pages: Not Supported 00:25:14.505 Persistent Event Log Pages: Not Supported 00:25:14.505 Supported Log Pages Log Page: May Support 00:25:14.505 Commands Supported & 
Effects Log Page: Not Supported 00:25:14.505 Feature Identifiers & Effects Log Page:May Support 00:25:14.505 NVMe-MI Commands & Effects Log Page: May Support 00:25:14.505 Data Area 4 for Telemetry Log: Not Supported 00:25:14.505 Error Log Page Entries Supported: 128 00:25:14.505 Keep Alive: Supported 00:25:14.505 Keep Alive Granularity: 10000 ms 00:25:14.505 00:25:14.505 NVM Command Set Attributes 00:25:14.505 ========================== 00:25:14.505 Submission Queue Entry Size 00:25:14.505 Max: 64 00:25:14.505 Min: 64 00:25:14.505 Completion Queue Entry Size 00:25:14.505 Max: 16 00:25:14.505 Min: 16 00:25:14.505 Number of Namespaces: 32 00:25:14.505 Compare Command: Supported 00:25:14.505 Write Uncorrectable Command: Not Supported 00:25:14.505 Dataset Management Command: Supported 00:25:14.505 Write Zeroes Command: Supported 00:25:14.505 Set Features Save Field: Not Supported 00:25:14.505 Reservations: Supported 00:25:14.505 Timestamp: Not Supported 00:25:14.505 Copy: Supported 00:25:14.505 Volatile Write Cache: Present 00:25:14.505 Atomic Write Unit (Normal): 1 00:25:14.505 Atomic Write Unit (PFail): 1 00:25:14.505 Atomic Compare & Write Unit: 1 00:25:14.505 Fused Compare & Write: Supported 00:25:14.505 Scatter-Gather List 00:25:14.505 SGL Command Set: Supported 00:25:14.505 SGL Keyed: Supported 00:25:14.505 SGL Bit Bucket Descriptor: Not Supported 00:25:14.505 SGL Metadata Pointer: Not Supported 00:25:14.505 Oversized SGL: Not Supported 00:25:14.505 SGL Metadata Address: Not Supported 00:25:14.505 SGL Offset: Supported 00:25:14.505 Transport SGL Data Block: Not Supported 00:25:14.505 Replay Protected Memory Block: Not Supported 00:25:14.505 00:25:14.505 Firmware Slot Information 00:25:14.505 ========================= 00:25:14.505 Active slot: 1 00:25:14.505 Slot 1 Firmware Revision: 25.01 00:25:14.505 00:25:14.505 00:25:14.505 Commands Supported and Effects 00:25:14.505 ============================== 00:25:14.505 Admin Commands 00:25:14.505 -------------- 
00:25:14.505 Get Log Page (02h): Supported 00:25:14.505 Identify (06h): Supported 00:25:14.505 Abort (08h): Supported 00:25:14.505 Set Features (09h): Supported 00:25:14.505 Get Features (0Ah): Supported 00:25:14.505 Asynchronous Event Request (0Ch): Supported 00:25:14.505 Keep Alive (18h): Supported 00:25:14.505 I/O Commands 00:25:14.505 ------------ 00:25:14.505 Flush (00h): Supported LBA-Change 00:25:14.505 Write (01h): Supported LBA-Change 00:25:14.505 Read (02h): Supported 00:25:14.505 Compare (05h): Supported 00:25:14.505 Write Zeroes (08h): Supported LBA-Change 00:25:14.505 Dataset Management (09h): Supported LBA-Change 00:25:14.505 Copy (19h): Supported LBA-Change 00:25:14.505 00:25:14.505 Error Log 00:25:14.505 ========= 00:25:14.505 00:25:14.505 Arbitration 00:25:14.505 =========== 00:25:14.505 Arbitration Burst: 1 00:25:14.505 00:25:14.505 Power Management 00:25:14.505 ================ 00:25:14.505 Number of Power States: 1 00:25:14.505 Current Power State: Power State #0 00:25:14.505 Power State #0: 00:25:14.505 Max Power: 0.00 W 00:25:14.505 Non-Operational State: Operational 00:25:14.505 Entry Latency: Not Reported 00:25:14.505 Exit Latency: Not Reported 00:25:14.505 Relative Read Throughput: 0 00:25:14.505 Relative Read Latency: 0 00:25:14.505 Relative Write Throughput: 0 00:25:14.505 Relative Write Latency: 0 00:25:14.505 Idle Power: Not Reported 00:25:14.505 Active Power: Not Reported 00:25:14.505 Non-Operational Permissive Mode: Not Supported 00:25:14.505 00:25:14.505 Health Information 00:25:14.505 ================== 00:25:14.505 Critical Warnings: 00:25:14.505 Available Spare Space: OK 00:25:14.505 Temperature: OK 00:25:14.505 Device Reliability: OK 00:25:14.505 Read Only: No 00:25:14.505 Volatile Memory Backup: OK 00:25:14.505 Current Temperature: 0 Kelvin (-273 Celsius) 00:25:14.505 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:25:14.505 Available Spare: 0% 00:25:14.505 Available Spare Threshold: 0% 00:25:14.505 Life Percentage 
Used:[2024-11-20 08:22:28.464879] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.505 [2024-11-20 08:22:28.464883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x16b6690) 00:25:14.506 [2024-11-20 08:22:28.464889] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.506 [2024-11-20 08:22:28.464900] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718b80, cid 7, qid 0 00:25:14.506 [2024-11-20 08:22:28.464973] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.506 [2024-11-20 08:22:28.464979] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.506 [2024-11-20 08:22:28.464982] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.464985] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718b80) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.465012] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:25:14.506 [2024-11-20 08:22:28.465022] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718100) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.465028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.506 [2024-11-20 08:22:28.465032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718280) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.465036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.506 [2024-11-20 08:22:28.465040] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718400) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.465044] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.506 [2024-11-20 08:22:28.465048] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.465052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:14.506 [2024-11-20 08:22:28.465059] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465065] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.506 [2024-11-20 08:22:28.465071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.506 [2024-11-20 08:22:28.465082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.506 [2024-11-20 08:22:28.465166] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.506 [2024-11-20 08:22:28.465171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.506 [2024-11-20 08:22:28.465175] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.465183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465189] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.506 [2024-11-20 08:22:28.465198] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.506 [2024-11-20 08:22:28.465215] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.506 [2024-11-20 08:22:28.465318] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.506 [2024-11-20 08:22:28.465323] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.506 [2024-11-20 08:22:28.465326] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465330] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.465334] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:25:14.506 [2024-11-20 08:22:28.465338] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:25:14.506 [2024-11-20 08:22:28.465346] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465349] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465352] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.506 [2024-11-20 08:22:28.465358] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.506 [2024-11-20 08:22:28.465367] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.506 [2024-11-20 08:22:28.465468] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.506 [2024-11-20 08:22:28.465474] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.506 [2024-11-20 08:22:28.465477] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465480] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.465488] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465492] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465495] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.506 [2024-11-20 08:22:28.465500] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.506 [2024-11-20 08:22:28.465509] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.506 [2024-11-20 08:22:28.465574] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.506 [2024-11-20 08:22:28.465579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.506 [2024-11-20 08:22:28.465582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.465593] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465597] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465600] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.506 [2024-11-20 08:22:28.465605] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.506 [2024-11-20 08:22:28.465614] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.506 [2024-11-20 08:22:28.465721] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.506 [2024-11-20 
08:22:28.465727] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.506 [2024-11-20 08:22:28.465730] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465733] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.465741] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465749] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.506 [2024-11-20 08:22:28.465755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.506 [2024-11-20 08:22:28.465765] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.506 [2024-11-20 08:22:28.465872] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.506 [2024-11-20 08:22:28.465878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.506 [2024-11-20 08:22:28.465881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465884] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.465892] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465895] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.465898] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.506 [2024-11-20 08:22:28.465904] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.506 [2024-11-20 
08:22:28.465912] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.506 [2024-11-20 08:22:28.466022] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.506 [2024-11-20 08:22:28.466028] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.506 [2024-11-20 08:22:28.466031] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.466034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.466042] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.466045] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.466048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.506 [2024-11-20 08:22:28.466054] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.506 [2024-11-20 08:22:28.466063] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.506 [2024-11-20 08:22:28.466124] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.506 [2024-11-20 08:22:28.466129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.506 [2024-11-20 08:22:28.466132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.466135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.466143] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.466146] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.466149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.506 [2024-11-20 08:22:28.466155] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.506 [2024-11-20 08:22:28.466164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.506 [2024-11-20 08:22:28.466275] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.506 [2024-11-20 08:22:28.466281] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.506 [2024-11-20 08:22:28.466284] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.466287] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.506 [2024-11-20 08:22:28.466295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.466299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.506 [2024-11-20 08:22:28.466303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.506 [2024-11-20 08:22:28.466309] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.506 [2024-11-20 08:22:28.466318] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.506 [2024-11-20 08:22:28.466426] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.506 [2024-11-20 08:22:28.466432] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.507 [2024-11-20 08:22:28.466435] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.507 [2024-11-20 08:22:28.466446] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466449] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466452] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.507 [2024-11-20 08:22:28.466458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.507 [2024-11-20 08:22:28.466467] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.507 [2024-11-20 08:22:28.466577] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.507 [2024-11-20 08:22:28.466582] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.507 [2024-11-20 08:22:28.466585] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466588] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.507 [2024-11-20 08:22:28.466596] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466599] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466602] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.507 [2024-11-20 08:22:28.466608] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.507 [2024-11-20 08:22:28.466617] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.507 [2024-11-20 08:22:28.466681] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.507 [2024-11-20 08:22:28.466686] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.507 [2024-11-20 08:22:28.466689] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466692] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.507 [2024-11-20 08:22:28.466700] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466703] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466706] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.507 [2024-11-20 08:22:28.466712] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.507 [2024-11-20 08:22:28.466721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.507 [2024-11-20 08:22:28.466829] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.507 [2024-11-20 08:22:28.466835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.507 [2024-11-20 08:22:28.466838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.507 [2024-11-20 08:22:28.466849] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466852] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466855] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.507 [2024-11-20 08:22:28.466862] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.507 [2024-11-20 08:22:28.466872] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.507 [2024-11-20 
08:22:28.466980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.507 [2024-11-20 08:22:28.466986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.507 [2024-11-20 08:22:28.466988] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.466992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.507 [2024-11-20 08:22:28.467000] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467006] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.507 [2024-11-20 08:22:28.467011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.507 [2024-11-20 08:22:28.467021] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.507 [2024-11-20 08:22:28.467132] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.507 [2024-11-20 08:22:28.467137] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.507 [2024-11-20 08:22:28.467140] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.507 [2024-11-20 08:22:28.467151] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467154] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467157] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.507 [2024-11-20 08:22:28.467163] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.507 [2024-11-20 08:22:28.467172] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.507 [2024-11-20 08:22:28.467244] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.507 [2024-11-20 08:22:28.467249] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.507 [2024-11-20 08:22:28.467252] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467256] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.507 [2024-11-20 08:22:28.467264] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467268] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467271] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.507 [2024-11-20 08:22:28.467276] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.507 [2024-11-20 08:22:28.467286] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.507 [2024-11-20 08:22:28.467385] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.507 [2024-11-20 08:22:28.467391] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.507 [2024-11-20 08:22:28.467394] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.507 [2024-11-20 08:22:28.467405] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467408] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:25:14.507 [2024-11-20 08:22:28.467411] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.507 [2024-11-20 08:22:28.467417] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.507 [2024-11-20 08:22:28.467428] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.507 [2024-11-20 08:22:28.467536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.507 [2024-11-20 08:22:28.467541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.507 [2024-11-20 08:22:28.467544] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467547] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.507 [2024-11-20 08:22:28.467555] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467559] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467562] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.507 [2024-11-20 08:22:28.467567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.507 [2024-11-20 08:22:28.467577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.507 [2024-11-20 08:22:28.467685] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.507 [2024-11-20 08:22:28.467691] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.507 [2024-11-20 08:22:28.467694] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467697] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) 
on tqpair=0x16b6690 00:25:14.507 [2024-11-20 08:22:28.467705] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467708] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467711] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.507 [2024-11-20 08:22:28.467717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.507 [2024-11-20 08:22:28.467726] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.507 [2024-11-20 08:22:28.467787] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.507 [2024-11-20 08:22:28.467792] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.507 [2024-11-20 08:22:28.467795] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.507 [2024-11-20 08:22:28.467798] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.508 [2024-11-20 08:22:28.467806] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.467809] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.467812] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.508 [2024-11-20 08:22:28.467818] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.508 [2024-11-20 08:22:28.467827] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.508 [2024-11-20 08:22:28.467908] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.508 [2024-11-20 08:22:28.467913] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:25:14.508 [2024-11-20 08:22:28.467916] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.467920] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.508 [2024-11-20 08:22:28.467928] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.467931] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.467934] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.508 [2024-11-20 08:22:28.467940] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.508 [2024-11-20 08:22:28.467949] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.508 [2024-11-20 08:22:28.468027] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.508 [2024-11-20 08:22:28.468033] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.508 [2024-11-20 08:22:28.468036] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.468039] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.508 [2024-11-20 08:22:28.468047] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.468050] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.468053] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.508 [2024-11-20 08:22:28.468059] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.508 [2024-11-20 08:22:28.468068] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x1718580, cid 3, qid 0 00:25:14.508 [2024-11-20 08:22:28.468125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.508 [2024-11-20 08:22:28.468131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.508 [2024-11-20 08:22:28.468134] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.468137] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.508 [2024-11-20 08:22:28.468145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.468148] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.468151] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.508 [2024-11-20 08:22:28.468157] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.508 [2024-11-20 08:22:28.468166] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.508 [2024-11-20 08:22:28.472211] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.508 [2024-11-20 08:22:28.472219] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.508 [2024-11-20 08:22:28.472222] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.472225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.508 [2024-11-20 08:22:28.472235] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.472239] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.472242] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x16b6690) 00:25:14.508 [2024-11-20 08:22:28.472248] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:14.508 [2024-11-20 08:22:28.472259] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1718580, cid 3, qid 0 00:25:14.508 [2024-11-20 08:22:28.472411] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:25:14.508 [2024-11-20 08:22:28.472417] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:25:14.508 [2024-11-20 08:22:28.472420] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:25:14.508 [2024-11-20 08:22:28.472423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1718580) on tqpair=0x16b6690 00:25:14.508 [2024-11-20 08:22:28.472429] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:25:14.508 0% 00:25:14.508 Data Units Read: 0 00:25:14.508 Data Units Written: 0 00:25:14.508 Host Read Commands: 0 00:25:14.508 Host Write Commands: 0 00:25:14.508 Controller Busy Time: 0 minutes 00:25:14.508 Power Cycles: 0 00:25:14.508 Power On Hours: 0 hours 00:25:14.508 Unsafe Shutdowns: 0 00:25:14.508 Unrecoverable Media Errors: 0 00:25:14.508 Lifetime Error Log Entries: 0 00:25:14.508 Warning Temperature Time: 0 minutes 00:25:14.508 Critical Temperature Time: 0 minutes 00:25:14.508 00:25:14.508 Number of Queues 00:25:14.508 ================ 00:25:14.508 Number of I/O Submission Queues: 127 00:25:14.508 Number of I/O Completion Queues: 127 00:25:14.508 00:25:14.508 Active Namespaces 00:25:14.508 ================= 00:25:14.508 Namespace ID:1 00:25:14.508 Error Recovery Timeout: Unlimited 00:25:14.508 Command Set Identifier: NVM (00h) 00:25:14.508 Deallocate: Supported 00:25:14.508 Deallocated/Unwritten Error: Not Supported 00:25:14.508 Deallocated Read Value: Unknown 00:25:14.508 Deallocate in Write Zeroes: Not Supported 00:25:14.508 Deallocated Guard Field: 0xFFFF 00:25:14.508 
Flush: Supported 00:25:14.508 Reservation: Supported 00:25:14.508 Namespace Sharing Capabilities: Multiple Controllers 00:25:14.508 Size (in LBAs): 131072 (0GiB) 00:25:14.508 Capacity (in LBAs): 131072 (0GiB) 00:25:14.508 Utilization (in LBAs): 131072 (0GiB) 00:25:14.508 NGUID: ABCDEF0123456789ABCDEF0123456789 00:25:14.508 EUI64: ABCDEF0123456789 00:25:14.508 UUID: a464ad1b-5def-4ac5-9d81-fb23ef9bc762 00:25:14.508 Thin Provisioning: Not Supported 00:25:14.508 Per-NS Atomic Units: Yes 00:25:14.508 Atomic Boundary Size (Normal): 0 00:25:14.508 Atomic Boundary Size (PFail): 0 00:25:14.508 Atomic Boundary Offset: 0 00:25:14.508 Maximum Single Source Range Length: 65535 00:25:14.508 Maximum Copy Length: 65535 00:25:14.508 Maximum Source Range Count: 1 00:25:14.508 NGUID/EUI64 Never Reused: No 00:25:14.508 Namespace Write Protected: No 00:25:14.508 Number of LBA Formats: 1 00:25:14.508 Current LBA Format: LBA Format #00 00:25:14.508 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:14.508 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:14.508 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:14.508 rmmod nvme_tcp 00:25:14.768 rmmod nvme_fabrics 00:25:14.768 rmmod nvme_keyring 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # '[' -n 1781048 ']' 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 1781048 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 1781048 ']' 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 1781048 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1781048 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1781048' 00:25:14.768 killing process with pid 1781048 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 
1781048 00:25:14.768 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 1781048 00:25:15.027 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:15.027 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:25:15.027 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@254 -- # local dev 00:25:15.027 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:15.027 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:15.027 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:15.027 08:22:28 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@121 -- # return 0 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # ip addr flush 
dev cvl_0_0 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@274 -- # iptr 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-save 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@548 -- # iptables-restore 00:25:16.933 00:25:16.933 real 0m9.431s 00:25:16.933 user 0m5.502s 00:25:16.933 sys 0m4.865s 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:25:16.933 ************************************ 00:25:16.933 END TEST nvmf_identify 00:25:16.933 ************************************ 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host -- 
nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:16.933 08:22:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.193 ************************************ 00:25:17.193 START TEST nvmf_perf 00:25:17.193 ************************************ 00:25:17.193 08:22:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:25:17.193 * Looking for test storage... 00:25:17.193 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:25:17.193 08:22:31 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:25:17.193 
08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:17.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.193 --rc genhtml_branch_coverage=1 00:25:17.193 --rc genhtml_function_coverage=1 00:25:17.193 --rc genhtml_legend=1 00:25:17.193 --rc geninfo_all_blocks=1 00:25:17.193 --rc geninfo_unexecuted_blocks=1 00:25:17.193 00:25:17.193 ' 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:17.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.193 --rc genhtml_branch_coverage=1 00:25:17.193 --rc genhtml_function_coverage=1 00:25:17.193 --rc genhtml_legend=1 00:25:17.193 --rc geninfo_all_blocks=1 00:25:17.193 --rc geninfo_unexecuted_blocks=1 00:25:17.193 00:25:17.193 ' 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:17.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.193 --rc genhtml_branch_coverage=1 00:25:17.193 --rc genhtml_function_coverage=1 00:25:17.193 --rc genhtml_legend=1 00:25:17.193 --rc geninfo_all_blocks=1 00:25:17.193 --rc geninfo_unexecuted_blocks=1 00:25:17.193 00:25:17.193 ' 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:17.193 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:17.193 --rc genhtml_branch_coverage=1 00:25:17.193 --rc genhtml_function_coverage=1 00:25:17.193 --rc genhtml_legend=1 00:25:17.193 --rc geninfo_all_blocks=1 00:25:17.193 --rc geninfo_unexecuted_blocks=1 00:25:17.193 00:25:17.193 ' 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:17.193 08:22:31 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:17.193 08:22:31 
nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:17.193 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:17.194 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:17.194 08:22:31 
nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # xtrace_disable 00:25:17.194 08:22:31 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # pci_devs=() 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # net_devs=() 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # e810=() 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # local -ga e810 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # x722=() 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # local -ga x722 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # mlx=() 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@138 -- # local -ga mlx 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:23.766 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 
00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:23.767 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:23.767 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:23.767 Found net devices under 0000:86:00.0: cvl_0_0 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:23.767 Found net devices under 0000:86:00.1: cvl_0_1 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 
-- # net_devs+=("${pci_net_devs[@]}") 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # is_hw=yes 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@247 -- # create_target_ns 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 
00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:23.767 10.0.0.1 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:23.767 08:22:36 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:23.767 10.0.0.2 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:25:23.767 08:22:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:23.767 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:25:23.767 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:25:23.767 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:23.767 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:23.767 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD 
]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address 
initiator0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:23.768 
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:23.768 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.438 ms 00:25:23.768 00:25:23.768 --- 10.0.0.1 ping statistics --- 00:25:23.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.768 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 
10.0.0.2 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:23.768 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:23.768 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:25:23.768 00:25:23.768 --- 10.0.0.2 ping statistics --- 00:25:23.768 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:23.768 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # return 0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 
-- # [[ -n '' ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # return 1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev= 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@160 -- # return 0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target0 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 
00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:23.768 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # local dev=target1 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # return 1 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@159 -- # dev= 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@160 -- # return 0 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 
00:25:23.769 ' 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=1784830 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 1784830 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 1784830 ']' 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:23.769 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:23.769 08:22:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:23.769 [2024-11-20 08:22:37.307335] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:25:23.769 [2024-11-20 08:22:37.307388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:23.769 [2024-11-20 08:22:37.389006] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:23.769 [2024-11-20 08:22:37.432392] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:23.769 [2024-11-20 08:22:37.432430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:23.769 [2024-11-20 08:22:37.432436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:23.769 [2024-11-20 08:22:37.432442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:23.769 [2024-11-20 08:22:37.432452] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:23.769 [2024-11-20 08:22:37.433997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:23.769 [2024-11-20 08:22:37.434039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:23.769 [2024-11-20 08:22:37.434154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:23.769 [2024-11-20 08:22:37.434155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:24.338 08:22:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:24.338 08:22:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:25:24.338 08:22:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:24.338 08:22:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:24.338 08:22:38 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:24.338 08:22:38 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:24.338 08:22:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:24.338 08:22:38 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:25:27.628 08:22:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:25:27.628 08:22:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:25:27.628 08:22:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:25:27.628 08:22:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:25:27.628 08:22:41 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:25:27.628 08:22:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:25:27.628 08:22:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:25:27.628 08:22:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:25:27.628 08:22:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:25:27.887 [2024-11-20 08:22:41.816764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.887 08:22:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:28.145 08:22:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:28.145 08:22:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:28.404 08:22:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:25:28.404 08:22:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:28.663 08:22:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:28.663 [2024-11-20 08:22:42.635823] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:28.663 08:22:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:25:28.922 08:22:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:25:28.922 08:22:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:28.922 08:22:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:25:28.922 08:22:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:25:30.300 Initializing NVMe Controllers 00:25:30.300 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:25:30.300 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:25:30.300 Initialization complete. Launching workers. 00:25:30.300 ======================================================== 00:25:30.300 Latency(us) 00:25:30.300 Device Information : IOPS MiB/s Average min max 00:25:30.300 PCIE (0000:5e:00.0) NSID 1 from core 0: 98701.74 385.55 323.59 34.54 6196.52 00:25:30.300 ======================================================== 00:25:30.300 Total : 98701.74 385.55 323.59 34.54 6196.52 00:25:30.300 00:25:30.300 08:22:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:31.679 Initializing NVMe Controllers 00:25:31.679 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:31.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:31.679 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:31.679 Initialization complete. Launching workers. 
00:25:31.679 ======================================================== 00:25:31.679 Latency(us) 00:25:31.679 Device Information : IOPS MiB/s Average min max 00:25:31.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 112.00 0.44 9132.56 103.18 44933.92 00:25:31.679 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17924.00 6987.59 47886.93 00:25:31.679 ======================================================== 00:25:31.679 Total : 168.00 0.66 12063.04 103.18 47886.93 00:25:31.679 00:25:31.679 08:22:45 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:33.140 Initializing NVMe Controllers 00:25:33.140 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:33.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:33.140 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:33.140 Initialization complete. Launching workers. 
00:25:33.140 ======================================================== 00:25:33.140 Latency(us) 00:25:33.140 Device Information : IOPS MiB/s Average min max 00:25:33.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11240.00 43.91 2850.22 371.22 6203.04 00:25:33.140 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3946.00 15.41 8143.01 6797.38 16063.96 00:25:33.140 ======================================================== 00:25:33.140 Total : 15186.00 59.32 4225.52 371.22 16063.96 00:25:33.140 00:25:33.140 08:22:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:25:33.140 08:22:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:25:33.140 08:22:46 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:35.726 Initializing NVMe Controllers 00:25:35.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:35.726 Controller IO queue size 128, less than required. 00:25:35.726 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:35.726 Controller IO queue size 128, less than required. 00:25:35.726 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:35.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:35.726 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:35.726 Initialization complete. Launching workers. 
00:25:35.726 ======================================================== 00:25:35.726 Latency(us) 00:25:35.726 Device Information : IOPS MiB/s Average min max 00:25:35.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1786.97 446.74 72818.32 46419.08 129473.06 00:25:35.726 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 604.99 151.25 215689.88 65477.98 332953.79 00:25:35.726 ======================================================== 00:25:35.726 Total : 2391.96 597.99 108954.31 46419.08 332953.79 00:25:35.726 00:25:35.726 08:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:25:35.726 No valid NVMe controllers or AIO or URING devices found 00:25:35.726 Initializing NVMe Controllers 00:25:35.726 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:35.726 Controller IO queue size 128, less than required. 00:25:35.726 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:35.726 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:25:35.726 Controller IO queue size 128, less than required. 00:25:35.726 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:35.726 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:25:35.726 WARNING: Some requested NVMe devices were skipped 00:25:35.726 08:22:49 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:25:38.265 Initializing NVMe Controllers 00:25:38.265 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:25:38.265 Controller IO queue size 128, less than required. 00:25:38.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:38.265 Controller IO queue size 128, less than required. 00:25:38.265 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:25:38.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:25:38.265 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:25:38.265 Initialization complete. Launching workers. 
00:25:38.265 00:25:38.265 ==================== 00:25:38.265 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:25:38.265 TCP transport: 00:25:38.265 polls: 12219 00:25:38.265 idle_polls: 8906 00:25:38.265 sock_completions: 3313 00:25:38.265 nvme_completions: 6093 00:25:38.265 submitted_requests: 9140 00:25:38.265 queued_requests: 1 00:25:38.265 00:25:38.265 ==================== 00:25:38.265 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:25:38.265 TCP transport: 00:25:38.265 polls: 11767 00:25:38.265 idle_polls: 7852 00:25:38.265 sock_completions: 3915 00:25:38.265 nvme_completions: 6825 00:25:38.265 submitted_requests: 10262 00:25:38.265 queued_requests: 1 00:25:38.265 ======================================================== 00:25:38.265 Latency(us) 00:25:38.265 Device Information : IOPS MiB/s Average min max 00:25:38.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1521.28 380.32 86033.55 39698.40 158991.08 00:25:38.265 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1704.07 426.02 74995.18 47108.76 102961.56 00:25:38.265 ======================================================== 00:25:38.265 Total : 3225.35 806.34 80201.57 39698.40 158991.08 00:25:38.265 00:25:38.265 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:25:38.265 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:38.265 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:25:38.265 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:25:38.265 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:25:38.265 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:38.265 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@99 -- # sync 00:25:38.265 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:38.265 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e 00:25:38.265 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:38.265 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:38.265 rmmod nvme_tcp 00:25:38.524 rmmod nvme_fabrics 00:25:38.524 rmmod nvme_keyring 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 1784830 ']' 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 1784830 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 1784830 ']' 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 1784830 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1784830 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1784830' 00:25:38.524 killing process with pid 1784830 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 1784830 00:25:38.524 08:22:52 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 1784830 00:25:41.062 08:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:41.062 08:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:25:41.062 08:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@254 -- # local dev 00:25:41.062 08:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:41.062 08:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:41.062 08:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:41.062 08:22:54 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@121 -- # return 0 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:25:42.969 08:22:56 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@274 -- # iptr 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-save 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@548 -- # iptables-restore 00:25:42.969 00:25:42.969 real 0m25.614s 00:25:42.969 user 1m8.174s 00:25:42.969 sys 0m8.333s 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:25:42.969 ************************************ 00:25:42.969 END TEST nvmf_perf 00:25:42.969 ************************************ 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.969 ************************************ 00:25:42.969 START TEST nvmf_fio_host 00:25:42.969 ************************************ 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:25:42.969 * Looking for test storage... 00:25:42.969 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:25:42.969 08:22:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:42.969 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:42.970 08:22:56 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:42.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.970 --rc genhtml_branch_coverage=1 00:25:42.970 --rc genhtml_function_coverage=1 00:25:42.970 --rc genhtml_legend=1 00:25:42.970 --rc geninfo_all_blocks=1 00:25:42.970 --rc geninfo_unexecuted_blocks=1 00:25:42.970 00:25:42.970 ' 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:42.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.970 --rc genhtml_branch_coverage=1 00:25:42.970 --rc genhtml_function_coverage=1 00:25:42.970 --rc genhtml_legend=1 00:25:42.970 --rc geninfo_all_blocks=1 00:25:42.970 --rc geninfo_unexecuted_blocks=1 00:25:42.970 00:25:42.970 ' 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:42.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.970 --rc genhtml_branch_coverage=1 00:25:42.970 --rc genhtml_function_coverage=1 00:25:42.970 --rc genhtml_legend=1 00:25:42.970 --rc geninfo_all_blocks=1 00:25:42.970 --rc geninfo_unexecuted_blocks=1 00:25:42.970 00:25:42.970 ' 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:42.970 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:42.970 --rc genhtml_branch_coverage=1 00:25:42.970 --rc genhtml_function_coverage=1 00:25:42.970 --rc genhtml_legend=1 00:25:42.970 --rc geninfo_all_blocks=1 00:25:42.970 --rc geninfo_unexecuted_blocks=1 00:25:42.970 00:25:42.970 ' 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.970 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 
00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:42.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # xtrace_disable 00:25:42.971 08:22:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # pci_devs=() 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # net_devs=() 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # e810=() 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # local -ga e810 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # x722=() 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # local -ga x722 00:25:49.543 08:23:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # mlx=() 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # local -ga mlx 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:49.543 08:23:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:49.543 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:49.543 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # (( 0 > 0 
)) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:49.543 Found net devices under 0000:86:00.0: cvl_0_0 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:49.543 Found net devices under 0000:86:00.1: cvl_0_1 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # is_hw=yes 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@247 -- # create_target_ns 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.543 08:23:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 
00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:25:49.543 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 
-- # echo 10.0.0.1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:49.544 10.0.0.1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:49.544 10.0.0.2 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:25:49.544 08:23:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 
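The `ipts -I INPUT 1 ...` line expands (via `nvmf/common.sh@547`) into a plain `iptables` call with a `-m comment` tag that records the original arguments. Tagging every rule with `SPDK_NVMF:` is what lets teardown later find and delete exactly the rules this run inserted. The wrapper is roughly the following sketch — `iptables` is stubbed out here so it can run without root, which is an assumption of the example, not of the real script:

```shell
#!/usr/bin/env bash
# Dry-run stub so the sketch runs unprivileged; drop this to use the
# real binary (assumption for illustration only).
iptables() { echo "iptables $*"; }

# Tag the rule with its own arguments so cleanup can grep SPDK_NVMF:
ipts() {
  iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Note how `$*` reproduces the inserted rule verbatim inside the comment, exactly as seen in the trace (`--comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'`).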
)) 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:49.544 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:49.544 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.489 ms 00:25:49.544 00:25:49.544 --- 10.0.0.1 ping statistics --- 00:25:49.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.544 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:49.544 
08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:25:49.544 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:49.544 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:25:49.544 00:25:49.544 --- 10.0.0.2 ping statistics --- 00:25:49.544 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:49.544 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # return 0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:49.544 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # 
echo cvl_0_0 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # return 1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev= 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@160 -- # return 0 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:25:49.545 08:23:02 
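The `[[ -n initiator1 ]] / [[ -n '' ]] / return 1` triple above is `get_net_dev` probing the `dev_map` associative array: the logical name `initiator1` is non-empty, but no physical device was registered under it (only one interface pair exists in this run), so the lookup fails and `NVMF_SECOND_INITIATOR_IP` stays empty. A condensed sketch of that lookup — array contents are taken from this run; the exact function body is an assumption:

```shell
#!/usr/bin/env bash
# dev_map is filled by the setup loop: logical role -> physical device.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

get_net_dev() {
  local dev=$1
  # Fail if the role name is empty or nothing was registered for it.
  [[ -n $dev && -n ${dev_map[$dev]} ]] || return 1
  echo "${dev_map[$dev]}"
}

get_net_dev initiator0    # prints: cvl_0_0
get_net_dev initiator1    # no second pair registered -> returns 1
```

The callers treat the empty result as "this role does not exist", which is why the legacy-env block ends up exporting `NVMF_FIRST_INITIATOR_IP=10.0.0.1` but leaves the second-initiator and second-target variables unset.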
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target0 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:25:49.545 08:23:02 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # local dev=target1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # return 1 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@159 -- # dev= 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@160 -- # return 0 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:25:49.545 ' 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 
00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1791189 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1791189 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 1791189 ']' 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
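`waitforlisten 1791189` blocks until the freshly launched `nvmf_tgt` answers on `/var/tmp/spdk.sock`, retrying up to `max_retries=100` times as the trace shows. A reduced sketch of the polling idea — it checks bare path existence instead of issuing the real RPC probe, which is a deliberate simplification for the example:

```shell
#!/usr/bin/env bash
# Poll until a path appears, in the spirit of waitforlisten waiting on
# /var/tmp/spdk.sock (real helper also verifies the RPC responds).
waitforpath() {
  local path=$1 max_retries=${2:-100} i
  for ((i = 0; i < max_retries; i++)); do
    [[ -e $path ]] && return 0
    sleep 0.1
  done
  return 1
}

tmp=$(mktemp -u)                 # a path that does not exist yet
(sleep 0.3; touch "$tmp") &      # simulate the target coming up
waitforpath "$tmp" 100 && echo "listening"
rm -f "$tmp"; wait
```

The trap registered just before (`process_shm ...; nvmftestfini; exit 1`) guarantees cleanup if this wait, or anything after it, fails.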
00:25:49.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:49.545 08:23:02 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.545 [2024-11-20 08:23:02.998957] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:25:49.545 [2024-11-20 08:23:02.999006] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:49.545 [2024-11-20 08:23:03.076248] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:49.545 [2024-11-20 08:23:03.118667] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:49.545 [2024-11-20 08:23:03.118703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:49.545 [2024-11-20 08:23:03.118711] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:49.545 [2024-11-20 08:23:03.118717] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:49.545 [2024-11-20 08:23:03.118723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:49.545 [2024-11-20 08:23:03.120320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:49.545 [2024-11-20 08:23:03.120430] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:49.545 [2024-11-20 08:23:03.120539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:49.545 [2024-11-20 08:23:03.120540] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:50.113 08:23:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:50.113 08:23:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:25:50.113 08:23:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:25:50.113 [2024-11-20 08:23:04.028161] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:50.113 08:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:25:50.113 08:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:50.113 08:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.113 08:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:25:50.373 Malloc1 00:25:50.373 08:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:50.632 08:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:50.891 08:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:50.891 [2024-11-20 08:23:04.877388] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:50.891 08:23:04 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:51.151 08:23:05 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:25:51.409 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:25:51.409 fio-3.35 
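The `ldd ... | grep libasan | awk '{print $3}'` loop above determines whether the `spdk_nvme` fio plugin was built with AddressSanitizer: if a sanitizer runtime is linked in, it must be listed first in `LD_PRELOAD`, ahead of the plugin itself, or fio aborts at startup. In this run both probes come back empty, so `LD_PRELOAD` carries only the plugin path. A standalone sketch of the probe — run here against `/bin/sh`, which is not sanitized, so the result is empty:

```shell
#!/usr/bin/env bash
# Find the ASan runtime a binary links against, if any; an empty
# result means LD_PRELOAD only needs the fio plugin itself.
find_asan_lib() {
  local bin=$1 lib
  for lib in libasan libclang_rt.asan; do
    # ldd lines look like: "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)"
    ldd "$bin" 2>/dev/null | awk -v l="$lib" '$1 ~ l {print $3; exit}'
  done
}

asan_lib=$(find_asan_lib /bin/sh)
echo "asan_lib='${asan_lib}'"    # empty for an unsanitized binary
```

Scanning for both `libasan` (GCC) and `libclang_rt.asan` (Clang) mirrors the two-element `sanitizers` array in the trace.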
00:25:51.409 Starting 1 thread 00:25:53.946 [2024-11-20 08:23:07.774638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19623d0 is same with the state(6) to be set 00:25:53.946 [2024-11-20 08:23:07.774694] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19623d0 is same with the state(6) to be set 00:25:53.946 [2024-11-20 08:23:07.774703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19623d0 is same with the state(6) to be set 00:25:53.946 [2024-11-20 08:23:07.774710] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19623d0 is same with the state(6) to be set 00:25:53.946 [2024-11-20 08:23:07.774716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19623d0 is same with the state(6) to be set 00:25:53.946 [2024-11-20 08:23:07.774722] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19623d0 is same with the state(6) to be set 00:25:53.946 [2024-11-20 08:23:07.774728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19623d0 is same with the state(6) to be set 00:25:53.946 [2024-11-20 08:23:07.774734] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19623d0 is same with the state(6) to be set 00:25:53.946 [2024-11-20 08:23:07.774739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19623d0 is same with the state(6) to be set 00:25:53.946 [2024-11-20 08:23:07.774745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19623d0 is same with the state(6) to be set 00:25:53.946 [2024-11-20 08:23:07.774751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19623d0 is same with the state(6) to be set 00:25:53.946 [2024-11-20 08:23:07.774757] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19623d0 is same with the state(6) to be set 
00:25:53.946 00:25:53.946 test: (groupid=0, jobs=1): err= 0: pid=1791792: Wed Nov 20 08:23:07 2024 00:25:53.946 read: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(93.0MiB/2005msec) 00:25:53.946 slat (nsec): min=1526, max=260953, avg=1713.22, stdev=2272.37 00:25:53.946 clat (usec): min=3073, max=10905, avg=5955.80, stdev=461.05 00:25:53.946 lat (usec): min=3104, max=10906, avg=5957.52, stdev=460.97 00:25:53.946 clat percentiles (usec): 00:25:53.946 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5407], 20.00th=[ 5604], 00:25:53.946 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5932], 60.00th=[ 6063], 00:25:53.946 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6521], 95.00th=[ 6652], 00:25:53.946 | 99.00th=[ 6980], 99.50th=[ 7111], 99.90th=[ 8455], 99.95th=[ 8979], 00:25:53.946 | 99.99th=[10290] 00:25:53.946 bw ( KiB/s): min=46930, max=47960, per=99.92%, avg=47442.50, stdev=463.92, samples=4 00:25:53.946 iops : min=11732, max=11990, avg=11860.50, stdev=116.17, samples=4 00:25:53.946 write: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(92.6MiB/2005msec); 0 zone resets 00:25:53.946 slat (nsec): min=1563, max=224456, avg=1773.30, stdev=1634.33 00:25:53.946 clat (usec): min=2427, max=9053, avg=4828.06, stdev=385.54 00:25:53.946 lat (usec): min=2442, max=9055, avg=4829.83, stdev=385.58 00:25:53.946 clat percentiles (usec): 00:25:53.946 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:25:53.946 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 00:25:53.946 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5276], 95.00th=[ 5407], 00:25:53.946 | 99.00th=[ 5669], 99.50th=[ 5866], 99.90th=[ 7504], 99.95th=[ 8455], 00:25:53.946 | 99.99th=[ 8979] 00:25:53.946 bw ( KiB/s): min=46784, max=47560, per=99.94%, avg=47240.00, stdev=369.45, samples=4 00:25:53.946 iops : min=11696, max=11890, avg=11810.00, stdev=92.36, samples=4 00:25:53.946 lat (msec) : 4=0.78%, 10=99.21%, 20=0.01% 00:25:53.946 cpu : usr=73.35%, sys=25.60%, ctx=124, majf=0, minf=3 00:25:53.946 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:25:53.946 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:53.946 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:53.946 issued rwts: total=23799,23693,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:53.946 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:53.946 00:25:53.946 Run status group 0 (all jobs): 00:25:53.946 READ: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=93.0MiB (97.5MB), run=2005-2005msec 00:25:53.946 WRITE: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=92.6MiB (97.0MB), run=2005-2005msec 00:25:53.946 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:53.946 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:53.946 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:25:53.946 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:53.946 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:25:53.946 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:53.946 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:25:53.946 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # 
local asan_lib= 00:25:53.946 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.946 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:53.946 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:25:53.946 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:53.947 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:53.947 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:53.947 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:25:53.947 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:25:53.947 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:25:53.947 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:25:53.947 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:25:53.947 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:25:53.947 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:25:53.947 08:23:07 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:25:54.206 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 
16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:25:54.206 fio-3.35 00:25:54.206 Starting 1 thread 00:25:55.142 [2024-11-20 08:23:08.932667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bf10 is same with the state(6) to be set 00:25:55.142 [2024-11-20 08:23:08.932716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x182bf10 is same with the state(6) to be set 00:25:56.520 00:25:56.520 test: (groupid=0, jobs=1): err= 0: pid=1792360: Wed Nov 20 08:23:10 2024 00:25:56.520 read: IOPS=11.0k, BW=171MiB/s (180MB/s)(344MiB/2008msec) 00:25:56.520 slat (nsec): min=2489, max=92195, avg=2799.85, stdev=1354.15 00:25:56.520 clat (usec): min=1088, max=13564, avg=6673.44, stdev=1595.74 00:25:56.520 lat (usec): min=1091, max=13579, avg=6676.24, stdev=1595.90 00:25:56.520 clat percentiles (usec): 00:25:56.520 | 1.00th=[ 3490], 5.00th=[ 4228], 10.00th=[ 4752], 20.00th=[ 5342], 00:25:56.520 | 30.00th=[ 5735], 40.00th=[ 6194], 50.00th=[ 6587], 60.00th=[ 6980], 00:25:56.520 | 70.00th=[ 7439], 80.00th=[ 7898], 90.00th=[ 8586], 95.00th=[ 9634], 00:25:56.520 | 99.00th=[10814], 99.50th=[11469], 99.90th=[13042], 99.95th=[13304], 00:25:56.520 | 99.99th=[13566] 00:25:56.520 bw ( KiB/s): min=80640, max=93312, per=51.00%, avg=89432.00, stdev=5928.87, samples=4 00:25:56.520 iops : min= 5040, max= 5832, avg=5589.50, stdev=370.55, samples=4 00:25:56.520 write: IOPS=6355, BW=99.3MiB/s (104MB/s)(182MiB/1833msec); 0 zone resets 00:25:56.520 slat (usec): min=29, max=387, avg=31.51, stdev= 8.47 00:25:56.520 clat (usec): min=3455, max=15409, avg=8629.78, stdev=1488.63 00:25:56.520 lat (usec): min=3484, max=15520, avg=8661.28, stdev=1490.53 00:25:56.520 clat percentiles (usec): 00:25:56.520 | 1.00th=[ 5800], 5.00th=[ 6456], 10.00th=[ 6915], 20.00th=[ 7373], 00:25:56.520 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:25:56.520 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10683], 95.00th=[11338], 
00:25:56.520 | 99.00th=[12649], 99.50th=[13042], 99.90th=[15008], 99.95th=[15139], 00:25:56.520 | 99.99th=[15401] 00:25:56.520 bw ( KiB/s): min=85024, max=97024, per=91.25%, avg=92792.00, stdev=5316.58, samples=4 00:25:56.520 iops : min= 5314, max= 6064, avg=5799.50, stdev=332.29, samples=4 00:25:56.520 lat (msec) : 2=0.04%, 4=2.08%, 10=89.44%, 20=8.43% 00:25:56.520 cpu : usr=87.10%, sys=12.16%, ctx=55, majf=0, minf=3 00:25:56.520 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:56.520 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:56.520 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:25:56.520 issued rwts: total=22009,11650,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:56.520 latency : target=0, window=0, percentile=100.00%, depth=128 00:25:56.520 00:25:56.520 Run status group 0 (all jobs): 00:25:56.520 READ: bw=171MiB/s (180MB/s), 171MiB/s-171MiB/s (180MB/s-180MB/s), io=344MiB (361MB), run=2008-2008msec 00:25:56.520 WRITE: bw=99.3MiB/s (104MB/s), 99.3MiB/s-99.3MiB/s (104MB/s-104MB/s), io=182MiB (191MB), run=1833-1833msec 00:25:56.520 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 
00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:56.779 rmmod nvme_tcp 00:25:56.779 rmmod nvme_fabrics 00:25:56.779 rmmod nvme_keyring 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 1791189 ']' 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 1791189 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 1791189 ']' 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 1791189 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:56.779 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1791189 00:25:57.039 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:57.039 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:57.039 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1791189' 00:25:57.039 killing process with pid 1791189 00:25:57.039 08:23:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 1791189 00:25:57.039 08:23:10 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 1791189 00:25:57.039 08:23:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:57.039 08:23:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:25:57.039 08:23:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@254 -- # local dev 00:25:57.039 08:23:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@257 -- # remove_target_ns 00:25:57.039 08:23:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:57.039 08:23:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:57.039 08:23:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@121 -- # return 0 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:25:59.608 
08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@274 -- # iptr 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-save 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@548 -- # iptables-restore 00:25:59.608 00:25:59.608 real 0m16.454s 00:25:59.608 user 0m48.546s 00:25:59.608 sys 0m6.604s 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.608 ************************************ 00:25:59.608 END TEST nvmf_fio_host 00:25:59.608 ************************************ 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- 
# run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:59.608 ************************************ 00:25:59.608 START TEST nvmf_failover 00:25:59.608 ************************************ 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:25:59.608 * Looking for test storage... 00:25:59.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 
00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:59.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.608 --rc genhtml_branch_coverage=1 00:25:59.608 --rc genhtml_function_coverage=1 00:25:59.608 --rc genhtml_legend=1 00:25:59.608 --rc geninfo_all_blocks=1 00:25:59.608 --rc geninfo_unexecuted_blocks=1 00:25:59.608 00:25:59.608 ' 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:59.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.608 --rc genhtml_branch_coverage=1 00:25:59.608 --rc genhtml_function_coverage=1 00:25:59.608 --rc genhtml_legend=1 00:25:59.608 --rc geninfo_all_blocks=1 00:25:59.608 --rc geninfo_unexecuted_blocks=1 00:25:59.608 00:25:59.608 ' 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:59.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.608 --rc genhtml_branch_coverage=1 00:25:59.608 --rc genhtml_function_coverage=1 00:25:59.608 --rc genhtml_legend=1 00:25:59.608 --rc geninfo_all_blocks=1 00:25:59.608 --rc geninfo_unexecuted_blocks=1 00:25:59.608 00:25:59.608 ' 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:59.608 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:59.608 --rc genhtml_branch_coverage=1 00:25:59.608 --rc genhtml_function_coverage=1 00:25:59.608 --rc genhtml_legend=1 00:25:59.608 --rc geninfo_all_blocks=1 00:25:59.608 --rc geninfo_unexecuted_blocks=1 00:25:59.608 00:25:59.608 ' 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:59.608 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@15 -- # shopt -s extglob 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:59.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 
00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # xtrace_disable 00:25:59.609 08:23:13 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.187 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:06.187 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # pci_devs=() 00:26:06.187 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:06.187 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # net_devs=() 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # e810=() 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # local -ga e810 00:26:06.188 08:23:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # x722=() 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # local -ga x722 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # mlx=() 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # local -ga mlx 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:06.188 08:23:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:06.188 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:06.188 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:06.188 Found net devices under 0000:86:00.0: cvl_0_0 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:06.188 08:23:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:06.188 Found net devices under 0000:86:00.1: cvl_0_1 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # is_hw=yes 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@247 -- # create_target_ns 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:06.188 08:23:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # 
[[ tcp == tcp ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:26:06.188 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:06.189 10.0.0.1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:06.189 10.0.0.2 
00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # 
dev_map["initiator$id"]=cvl_0_0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 
00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:06.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:06.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.467 ms 00:26:06.189 00:26:06.189 --- 10.0.0.1 ping statistics --- 00:26:06.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.189 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:06.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:06.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:26:06.189 00:26:06.189 --- 10.0.0.2 ping statistics --- 00:26:06.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:06.189 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # return 0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # 
local dev=initiator0 in_ns= ip 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:06.189 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=initiator1 
00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # return 1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev= 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@160 -- # return 0 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target0 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:06.190 08:23:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # local dev=target1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # return 1 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@159 -- # dev= 00:26:06.190 08:23:19 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@160 -- # return 0 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:26:06.190 ' 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=1796242 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 1796242 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1796242 ']' 00:26:06.190 08:23:19 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:06.190 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.190 [2024-11-20 08:23:19.599384] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:26:06.190 [2024-11-20 08:23:19.599430] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:06.190 [2024-11-20 08:23:19.677944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:06.190 [2024-11-20 08:23:19.719985] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:06.190 [2024-11-20 08:23:19.720019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:06.190 [2024-11-20 08:23:19.720026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:06.190 [2024-11-20 08:23:19.720032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:06.190 [2024-11-20 08:23:19.720038] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:06.190 [2024-11-20 08:23:19.721489] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:06.190 [2024-11-20 08:23:19.721596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:06.190 [2024-11-20 08:23:19.721597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:06.190 08:23:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:26:06.190 [2024-11-20 08:23:20.032794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:06.190 08:23:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:26:06.449 Malloc0 00:26:06.449 08:23:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:06.709 08:23:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:06.709 08:23:20 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:06.968 [2024-11-20 08:23:20.874341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:06.968 08:23:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:07.227 [2024-11-20 08:23:21.086877] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:07.227 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:07.487 [2024-11-20 08:23:21.287531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:07.487 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1796618 00:26:07.487 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:26:07.487 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:07.487 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1796618 /var/tmp/bdevperf.sock 00:26:07.487 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1796618 ']' 00:26:07.487 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:07.487 08:23:21 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:07.487 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:07.487 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:07.487 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:07.487 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:07.746 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.746 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:07.746 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:08.004 NVMe0n1 00:26:08.004 08:23:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:08.571 00:26:08.571 08:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1796727 00:26:08.571 08:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:26:08.571 08:23:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:26:09.509 08:23:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:09.767 [2024-11-20 08:23:23.566949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d12d0 is same with the state(6) to be set 00:26:09.768 [2024-11-20 08:23:23.567111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: message repeated 20 times: [*ERROR*: The recv state of tqpair=0x23d12d0 is same with the state(6) to be set] 00:26:09.768 08:23:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:26:13.051 08:23:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:13.051 00:26:13.051 08:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:13.310 [2024-11-20 08:23:27.211589] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23d2060 is same with the state(6) to be set 00:26:13.311 [2024-11-20 08:23:27.211985] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: message repeated 61 times: [*ERROR*: The recv state of tqpair=0x23d2060 is same with the state(6) to be set] 00:26:13.311 08:23:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:26:16.598 08:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:16.598 [2024-11-20 08:23:30.422227] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:16.598 08:23:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:26:17.535 08:23:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:17.793 08:23:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 1796727 00:26:24.375 { 00:26:24.375 "results": [ 00:26:24.375 { 00:26:24.375 "job": "NVMe0n1", 00:26:24.375 "core_mask": "0x1", 00:26:24.375 "workload": "verify", 00:26:24.375 "status": "finished", 00:26:24.375 "verify_range": { 00:26:24.375 "start": 0, 00:26:24.375 "length": 16384 00:26:24.375 }, 00:26:24.375 "queue_depth": 128, 00:26:24.375 "io_size": 4096, 00:26:24.375 "runtime": 15.006858, 00:26:24.375 "iops": 11259.585450865197, 00:26:24.375 "mibps": 43.98275566744218, 00:26:24.375 "io_failed": 9789, 00:26:24.375 "io_timeout": 0, 00:26:24.375 "avg_latency_us": 10724.295431171351, 00:26:24.375 "min_latency_us": 409.6, 00:26:24.375 "max_latency_us": 19723.21523809524
00:26:24.375 } 00:26:24.375 ], 00:26:24.375 "core_count": 1 00:26:24.375 } 00:26:24.375 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 1796618 00:26:24.375 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1796618 ']' 00:26:24.375 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1796618 00:26:24.375 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:24.375 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.375 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1796618 00:26:24.375 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:24.375 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:24.375 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1796618' 00:26:24.375 killing process with pid 1796618 00:26:24.375 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1796618 00:26:24.375 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1796618 00:26:24.375 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:24.375 [2024-11-20 08:23:21.363985] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:26:24.375 [2024-11-20 08:23:21.364039] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1796618 ] 00:26:24.375 [2024-11-20 08:23:21.439103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.375 [2024-11-20 08:23:21.480511] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:24.375 Running I/O for 15 seconds... 00:26:24.375 11263.00 IOPS, 44.00 MiB/s [2024-11-20T07:23:38.403Z] [2024-11-20 08:23:23.567546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:98496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:98504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:98512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:98520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:98528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:99368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.375 [2024-11-20 08:23:23.567711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:98536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:98544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:98552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:98560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567804] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:98568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:98576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:98584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:98592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:98600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:37 nsid:1 lba:98608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:98616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.567979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:98624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.567990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.568002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:98632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.568012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.568024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:98640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.568034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.568047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:98648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.568056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:26:24.375 [2024-11-20 08:23:23.568067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:98656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.375 [2024-11-20 08:23:23.568078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.375 [2024-11-20 08:23:23.568091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:98664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:98672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:98680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:98688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:98696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568196] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:98704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:98712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:98720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:98728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:98736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 
lba:98744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:98752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:98760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:98768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:98776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:98784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 
[2024-11-20 08:23:23.568481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:98792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:98800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:98808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:98816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:98824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:98832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568605] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:98840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:98848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:99376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.376 [2024-11-20 08:23:23.568673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:98856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:98864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 
lba:98872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:98880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:98888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:98896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:98904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:98912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 
[2024-11-20 08:23:23.568870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:98920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:98928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:98936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:98944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:98952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.568981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:98960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.568991] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.376 [2024-11-20 08:23:23.569003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:98968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.376 [2024-11-20 08:23:23.569015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:98976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:98984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:98992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:99000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:99008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:99016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:99024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:99040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:99048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 
[2024-11-20 08:23:23.569260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:99056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:99064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:99088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:99096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569389] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:99104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:99112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:99120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:99136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:99144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:99152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:99160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:99168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:99176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:99184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 
[2024-11-20 08:23:23.569652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:99192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:99208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:99216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:99248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:99272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 
lba:99280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.377 [2024-11-20 08:23:23.569914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.377 [2024-11-20 08:23:23.569925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:99288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:23.569936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.569948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:99296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:23.569958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.569971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:99304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:23.569982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.569994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:23.570004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:23.570026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 
[2024-11-20 08:23:23.570037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:23.570048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:23.570070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:23.570093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:23.570115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:23.570137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:99384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:99400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:99408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:99416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:99424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:99432 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:99440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:99448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:99456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:99464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:99472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 
08:23:23.570425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:99480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:99488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:99496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:99504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.378 [2024-11-20 08:23:23.570504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570531] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.378 [2024-11-20 08:23:23.570541] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.378 [2024-11-20 08:23:23.570551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:99512 len:8 PRP1 0x0 PRP2 0x0 00:26:24.378 [2024-11-20 08:23:23.570562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570618] 
bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:24.378 [2024-11-20 08:23:23.570648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.378 [2024-11-20 08:23:23.570661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.378 [2024-11-20 08:23:23.570683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570695] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.378 [2024-11-20 08:23:23.570705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570716] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.378 [2024-11-20 08:23:23.570726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:23.570737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 
00:26:24.378 [2024-11-20 08:23:23.573883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:26:24.378 [2024-11-20 08:23:23.573918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1add340 (9): Bad file descriptor 00:26:24.378 [2024-11-20 08:23:23.716622] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 00:26:24.378 10507.50 IOPS, 41.04 MiB/s [2024-11-20T07:23:38.406Z] 10824.00 IOPS, 42.28 MiB/s [2024-11-20T07:23:38.406Z] 11014.25 IOPS, 43.02 MiB/s [2024-11-20T07:23:38.406Z] [2024-11-20 08:23:27.213550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:82320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:27.213587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:27.213607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:82328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:27.213618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:27.213636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:82336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:27.213647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:27.213659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:82344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:27.213669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:27.213681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:82352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.378 [2024-11-20 08:23:27.213692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.378 [2024-11-20 08:23:27.213704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:82360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.213714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:82368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.213736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:82376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.213757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:82384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.213777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:82392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 
[2024-11-20 08:23:27.213799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:82400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.213821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:82408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.213844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:82416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.213866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:82424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.213888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:82432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.213914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213926] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:82440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.213936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:82448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.213958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:82456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.213980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.213992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:82464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.214002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:82472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.214025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:82480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.214047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:82488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.214069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:82496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.214091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:82504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.379 [2024-11-20 08:23:27.214116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:82656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:82664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:82672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 
[2024-11-20 08:23:27.214184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:82680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:82688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:82696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:82704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:82712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214316] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:82720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:82728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:82736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:82744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:82752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:82760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:26:24.379 [2024-11-20 08:23:27.214451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.379 [2024-11-20 08:23:27.214462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:82776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:82784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:82792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:82800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:82808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214577] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:82816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:82824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:82832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:82840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:82848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 
lba:82856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:82864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:82872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:82880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:82888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:82896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 
08:23:27.214833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:82904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:82912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:82920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:82928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:82936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:82944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214953] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:82952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.214987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:82960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.214998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:82968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:82976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:82984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:82992 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:83000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:83008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:83016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:83024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:83032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:83040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:83048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:83056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:83064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:83072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:83080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.380 [2024-11-20 08:23:27.215335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.380 [2024-11-20 08:23:27.215347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:83088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.381 [2024-11-20 08:23:27.215357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:83096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.381 [2024-11-20 08:23:27.215381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:83104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.381 [2024-11-20 08:23:27.215403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:83112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.381 [2024-11-20 08:23:27.215425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:83120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.381 [2024-11-20 08:23:27.215447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:83128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.381 
[2024-11-20 08:23:27.215469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:83136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.381 [2024-11-20 08:23:27.215491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:83144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.381 [2024-11-20 08:23:27.215512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:83152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.381 [2024-11-20 08:23:27.215534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:82512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.381 [2024-11-20 08:23:27.215556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:82520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.381 [2024-11-20 08:23:27.215578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215607] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.381 [2024-11-20 08:23:27.215618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83160 len:8 PRP1 0x0 PRP2 0x0 00:26:24.381 [2024-11-20 08:23:27.215628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215680] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.381 [2024-11-20 08:23:27.215693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.381 [2024-11-20 08:23:27.215714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215725] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.381 [2024-11-20 08:23:27.215735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215746] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:24.381 [2024-11-20 08:23:27.215756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381 [2024-11-20 08:23:27.215765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1add340 is same with the state(6) to be set 00:26:24.381 [2024-11-20 
[2024-11-20 08:23:27.215938] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.381
[2024-11-20 08:23:27.215950] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.381
[2024-11-20 08:23:27.215959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83168 len:8 PRP1 0x0 PRP2 0x0 00:26:24.381
[2024-11-20 08:23:27.215970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.381
[... identical abort / manual-complete / print-command / print-completion cycle repeated for WRITE sqid:1 cid:0 nsid:1 lba:83176 through lba:83336 (len:8, LBA step 8), every completion ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[... same cycle repeated for READ sqid:1 cid:0 nsid:1 lba:82528 through lba:82648 and lba:82320 through lba:82504 (len:8, LBA step 8), every completion ABORTED - SQ DELETION (00/08) ...]
[2024-11-20 08:23:27.221649] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.384
[2024-11-20 08:23:27.221668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82656 len:8 PRP1 0x0 PRP2 0x0 00:26:24.384
[2024-11-20 08:23:27.221680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.384
[... same cycle repeated for WRITE sqid:1 cid:0 nsid:1 lba:82664 through lba:82728 (len:8, LBA step 8), every completion ABORTED - SQ
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.384 [2024-11-20 08:23:27.222033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.384 [2024-11-20 08:23:27.222041] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.384 [2024-11-20 08:23:27.222049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82736 len:8 PRP1 0x0 PRP2 0x0 00:26:24.384 [2024-11-20 08:23:27.222059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.384 [2024-11-20 08:23:27.222069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.384 [2024-11-20 08:23:27.222077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.384 [2024-11-20 08:23:27.222085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82744 len:8 PRP1 0x0 PRP2 0x0 00:26:24.384 [2024-11-20 08:23:27.222095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.384 [2024-11-20 08:23:27.222105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.384 [2024-11-20 08:23:27.222113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.384 [2024-11-20 08:23:27.222124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82752 len:8 PRP1 0x0 PRP2 0x0 00:26:24.384 [2024-11-20 08:23:27.222134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.384 [2024-11-20 08:23:27.222144] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.384 [2024-11-20 08:23:27.222152] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.384 [2024-11-20 08:23:27.222160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82760 len:8 PRP1 0x0 PRP2 0x0 00:26:24.384 [2024-11-20 08:23:27.222170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.384 [2024-11-20 08:23:27.222180] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.384 [2024-11-20 08:23:27.222188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.384 [2024-11-20 08:23:27.222197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82768 len:8 PRP1 0x0 PRP2 0x0 00:26:24.384 [2024-11-20 08:23:27.222211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.384 [2024-11-20 08:23:27.222221] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.384 [2024-11-20 08:23:27.222229] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.384 [2024-11-20 08:23:27.222238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82776 len:8 PRP1 0x0 PRP2 0x0 00:26:24.384 [2024-11-20 08:23:27.222248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.384 [2024-11-20 08:23:27.222257] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.384 [2024-11-20 08:23:27.222265] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.384 [2024-11-20 08:23:27.222274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82784 len:8 PRP1 0x0 PRP2 0x0 00:26:24.384 
[2024-11-20 08:23:27.222284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.384 [2024-11-20 08:23:27.222294] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.384 [2024-11-20 08:23:27.222302] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.384 [2024-11-20 08:23:27.222311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82792 len:8 PRP1 0x0 PRP2 0x0 00:26:24.384 [2024-11-20 08:23:27.222321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.384 [2024-11-20 08:23:27.222331] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.384 [2024-11-20 08:23:27.222338] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.384 [2024-11-20 08:23:27.222347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82800 len:8 PRP1 0x0 PRP2 0x0 00:26:24.384 [2024-11-20 08:23:27.222357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222368] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222376] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82808 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222406] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222415] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82816 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222443] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82824 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222480] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222487] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82832 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222516] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222524] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222533] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82840 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222553] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222561] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82848 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222590] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222598] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82856 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222627] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222635] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82864 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222672] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82872 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222702] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82880 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222739] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222746] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82888 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222784] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82896 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222812] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222820] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82904 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222856] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82912 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222885] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222893] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82920 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222921] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222929] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82928 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222958] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.222966] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.222976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82936 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.222986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.222996] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.223004] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.223012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82944 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.223022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.223032] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 
[2024-11-20 08:23:27.223040] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.223049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82952 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.223058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.223069] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.223077] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.223085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82960 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.223095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.223105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.223113] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.223122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82968 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.223133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.223142] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.223151] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.223160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:82976 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.223169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.223179] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.385 [2024-11-20 08:23:27.223187] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.385 [2024-11-20 08:23:27.223196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82984 len:8 PRP1 0x0 PRP2 0x0 00:26:24.385 [2024-11-20 08:23:27.223212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.385 [2024-11-20 08:23:27.223222] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223230] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:82992 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223259] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83000 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223298] 
nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223306] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83008 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223334] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223342] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83016 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223371] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83024 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223408] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 
08:23:27.223424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83032 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223444] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223452] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83040 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223481] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223489] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83048 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223518] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223526] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83056 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223558] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83064 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223603] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83072 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223632] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223639] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83080 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223669] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223676] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83088 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223705] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223714] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83096 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223750] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83104 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223788] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83112 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 
[2024-11-20 08:23:27.223807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223817] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223825] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83120 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223857] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223864] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83128 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223894] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223902] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83136 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223931] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223939] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83144 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.223970] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.223978] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.223987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83152 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.223997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.224007] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.224016] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.224025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82512 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.224034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.224045] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.224054] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.386 [2024-11-20 08:23:27.224063] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:0 nsid:1 lba:82520 len:8 PRP1 0x0 PRP2 0x0 00:26:24.386 [2024-11-20 08:23:27.224072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.386 [2024-11-20 08:23:27.224083] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.386 [2024-11-20 08:23:27.224091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.387 [2024-11-20 08:23:27.224100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:83160 len:8 PRP1 0x0 PRP2 0x0 00:26:24.387 [2024-11-20 08:23:27.224110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:27.224161] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:26:24.387 [2024-11-20 08:23:27.224177] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:26:24.387 [2024-11-20 08:23:27.227304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:26:24.387 [2024-11-20 08:23:27.227340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1add340 (9): Bad file descriptor 00:26:24.387 [2024-11-20 08:23:27.250635] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
00:26:24.387 11008.40 IOPS, 43.00 MiB/s [2024-11-20T07:23:38.415Z] 11082.33 IOPS, 43.29 MiB/s [2024-11-20T07:23:38.415Z] 11172.43 IOPS, 43.64 MiB/s [2024-11-20T07:23:38.415Z] 11216.00 IOPS, 43.81 MiB/s [2024-11-20T07:23:38.415Z] 11240.22 IOPS, 43.91 MiB/s [2024-11-20T07:23:38.415Z] [2024-11-20 08:23:31.637432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:102888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.387 [2024-11-20 08:23:31.637472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.387 [2024-11-20 08:23:31.637500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:102904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.387 [2024-11-20 08:23:31.637519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:102912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.387 [2024-11-20 08:23:31.637539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:102920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.387 [2024-11-20 08:23:31.637559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:102928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.387 [2024-11-20 08:23:31.637579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:102936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.387 [2024-11-20 08:23:31.637600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:102944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.387 [2024-11-20 08:23:31.637621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:102952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.387 [2024-11-20 08:23:31.637642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:102960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.387 [2024-11-20 08:23:31.637664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:102008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 
08:23:31.637690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:102016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:102024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:102032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:102040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637802] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:91 nsid:1 lba:102056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:102064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:102072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:102080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:102088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:102096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:102104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:102112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.637984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:102120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.637994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.638006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:102128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.638016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.638028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:24.387 [2024-11-20 08:23:31.638038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.638050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:102136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 
08:23:31.638060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.638072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:102144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.638082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.638095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:102152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.638105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.638118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:102160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.638128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.638140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:102168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.638151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.638162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:102176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.638173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.638185] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:72 nsid:1 lba:102184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.638195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.638214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:102192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.387 [2024-11-20 08:23:31.638224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.387 [2024-11-20 08:23:31.638236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:102200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:102208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:102216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:102224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:102232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:102240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:102248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:102256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:102272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 
08:23:31.638449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:102280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:102288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:102296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:102304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:102312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638575] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:84 nsid:1 lba:102320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:102328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:102336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:102368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:102376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:102384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:102392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:102400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:102408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 
08:23:31.638835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:102416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:102424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:102432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:102440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:102448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638959] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:40 nsid:1 lba:102456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.638982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:102464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.638992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.639005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:102472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.639014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.388 [2024-11-20 08:23:31.639028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:102480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.388 [2024-11-20 08:23:31.639039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.389 [2024-11-20 08:23:31.639051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:102488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.389 [2024-11-20 08:23:31.639062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.389 [2024-11-20 08:23:31.639074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:102496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.389 [2024-11-20 08:23:31.639084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:26:24.389 [2024-11-20 08:23:31.639096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:102504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:24.389 [2024-11-20 08:23:31.639109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical nvme_io_qpair_print_command/spdk_nvme_print_completion pairs elided: READ sqid:1 lba:102512-102872 (len:8, cids vary) and WRITE sqid:1 lba:102976-103024 (len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000), each completed ABORTED - SQ DELETION (00/08) ...]
00:26:24.390 [2024-11-20 08:23:31.640370] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:24.390 [2024-11-20 08:23:31.640380] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:24.390 [2024-11-20 08:23:31.640389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:102880 len:8 PRP1 0x0 PRP2 0x0 00:26:24.390 [2024-11-20 08:23:31.640400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:24.390 [2024-11-20 08:23:31.640452] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
[... four ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 admin commands elided, each completed ABORTED - SQ DELETION (00/08) ...]
00:26:24.390 [2024-11-20 08:23:31.640572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:26:24.390 [2024-11-20 08:23:31.643717] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:26:24.390 [2024-11-20 08:23:31.643756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1add340 (9): Bad file descriptor 00:26:24.390 [2024-11-20 08:23:31.670844] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:26:24.390 11209.10 IOPS, 43.79 MiB/s [2024-11-20T07:23:38.418Z] 11219.91 IOPS, 43.83 MiB/s [2024-11-20T07:23:38.418Z] 11237.83 IOPS, 43.90 MiB/s [2024-11-20T07:23:38.418Z] 11246.62 IOPS, 43.93 MiB/s [2024-11-20T07:23:38.418Z] 11257.00 IOPS, 43.97 MiB/s 00:26:24.390 Latency(us) 00:26:24.390 [2024-11-20T07:23:38.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.390 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:24.390 Verification LBA range: start 0x0 length 0x4000 00:26:24.390 NVMe0n1 : 15.01 11259.59 43.98 652.30 0.00 10724.30 409.60 19723.22 00:26:24.390 [2024-11-20T07:23:38.418Z] =================================================================================================================== 00:26:24.390 [2024-11-20T07:23:38.418Z] Total : 11259.59 43.98 652.30 0.00 10724.30 409.60 19723.22 00:26:24.390 Received shutdown signal, test time was about 15.000000 seconds 00:26:24.390 00:26:24.390 Latency(us) 00:26:24.390 [2024-11-20T07:23:38.418Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.390 [2024-11-20T07:23:38.418Z] =================================================================================================================== 00:26:24.390 [2024-11-20T07:23:38.418Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.390 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:26:24.390 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:26:24.390 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:26:24.390 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1799172 00:26:24.390 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:26:24.390 
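The `grep -c 'Resetting controller successful'` check above is how failover.sh decides the run passed: it expects exactly three reset notices, one per failover. A minimal stand-alone sketch of that check, using a fabricated three-line stand-in for the real try.txt log (the nqn/controller numbers here are illustrative):

```shell
# Stand-in for the bdevperf log (try.txt); the real file is produced by the
# test run above. Three reset notices correspond to three successful failovers.
log=$(mktemp)
cat > "$log" <<'EOF'
bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful.
bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 4] Resetting controller successful.
bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
EOF
# Same count logic as the script: non-zero exit from the later (( count != 3 ))
# comparison would fail the test.
count=$(grep -c 'Resetting controller successful' "$log")
rm -f "$log"
echo "$count"
```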
08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1799172 /var/tmp/bdevperf.sock 00:26:24.390 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 1799172 ']' 00:26:24.390 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:26:24.390 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.390 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:26:24.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:26:24.390 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.390 08:23:37 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:24.390 08:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.390 08:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:26:24.390 08:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:26:24.390 [2024-11-20 08:23:38.204497] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:24.390 08:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:26:24.649 [2024-11-20 08:23:38.405028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:26:24.649 08:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:24.908 NVMe0n1 00:26:24.908 08:23:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:25.166 00:26:25.166 08:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:26:25.424 00:26:25.424 08:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:25.424 08:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:26:25.683 08:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:25.941 08:23:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:26:29.232 08:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:29.232 08:23:42 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:26:29.232 08:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 
00:26:29.232 08:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1800090 00:26:29.232 08:23:43 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 1800090 00:26:30.169 { 00:26:30.169 "results": [ 00:26:30.169 { 00:26:30.169 "job": "NVMe0n1", 00:26:30.169 "core_mask": "0x1", 00:26:30.169 "workload": "verify", 00:26:30.169 "status": "finished", 00:26:30.169 "verify_range": { 00:26:30.169 "start": 0, 00:26:30.169 "length": 16384 00:26:30.169 }, 00:26:30.169 "queue_depth": 128, 00:26:30.169 "io_size": 4096, 00:26:30.169 "runtime": 1.007736, 00:26:30.169 "iops": 11525.836131685282, 00:26:30.169 "mibps": 45.022797389395635, 00:26:30.169 "io_failed": 0, 00:26:30.169 "io_timeout": 0, 00:26:30.169 "avg_latency_us": 11062.782058339995, 00:26:30.169 "min_latency_us": 959.6342857142857, 00:26:30.169 "max_latency_us": 13481.691428571428 00:26:30.169 } 00:26:30.169 ], 00:26:30.169 "core_count": 1 00:26:30.169 } 00:26:30.169 08:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:30.169 [2024-11-20 08:23:37.811426] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
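For reference, the `mibps` field in the bdevperf results JSON above follows directly from its `iops` and `io_size` (bytes) fields: MiB/s = IOPS x io_size / 2^20. Recomputing with the numbers copied from that JSON:

```shell
# "iops" and "io_size" copied from the bdevperf results JSON above.
iops=11525.836131685282
io_size=4096
# Convert bytes/s to MiB/s (1 MiB = 1048576 bytes).
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
echo "$mibps"   # matches the reported "mibps" of ~45.02
```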
00:26:30.169 [2024-11-20 08:23:37.811483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1799172 ] 00:26:30.169 [2024-11-20 08:23:37.885750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.169 [2024-11-20 08:23:37.923334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:30.169 [2024-11-20 08:23:39.803826] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:26:30.169 [2024-11-20 08:23:39.803872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.169 [2024-11-20 08:23:39.803884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.169 [2024-11-20 08:23:39.803892] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.169 [2024-11-20 08:23:39.803899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.169 [2024-11-20 08:23:39.803906] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.169 [2024-11-20 08:23:39.803913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.169 [2024-11-20 08:23:39.803919] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:30.169 [2024-11-20 08:23:39.803926] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:30.169 [2024-11-20 08:23:39.803933] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:26:30.169 [2024-11-20 08:23:39.803960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:26:30.169 [2024-11-20 08:23:39.803974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x85c340 (9): Bad file descriptor 00:26:30.169 [2024-11-20 08:23:39.814513] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:26:30.169 Running I/O for 1 seconds... 00:26:30.169 11486.00 IOPS, 44.87 MiB/s 00:26:30.169 Latency(us) 00:26:30.169 [2024-11-20T07:23:44.197Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:30.169 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:30.169 Verification LBA range: start 0x0 length 0x4000 00:26:30.169 NVMe0n1 : 1.01 11525.84 45.02 0.00 0.00 11062.78 959.63 13481.69 00:26:30.169 [2024-11-20T07:23:44.197Z] =================================================================================================================== 00:26:30.169 [2024-11-20T07:23:44.197Z] Total : 11525.84 45.02 0.00 0.00 11062.78 959.63 13481.69 00:26:30.169 08:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:30.169 08:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:26:30.448 08:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:30.747 08:23:44 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:30.747 08:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:26:30.747 08:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:26:31.038 08:23:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:26:34.344 08:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:26:34.344 08:23:47 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 1799172 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1799172 ']' 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1799172 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1799172 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1799172' 00:26:34.344 killing 
process with pid 1799172 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1799172 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1799172 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:26:34.344 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:34.603 rmmod nvme_tcp 00:26:34.603 rmmod nvme_fabrics 00:26:34.603 rmmod nvme_keyring 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 1796242 ']' 00:26:34.603 08:23:48 
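The killprocess() helper from autotest_common.sh (invoked above for pid 1799172) resolves the PID's command name with `ps` and refuses to kill it if the name is `sudo`. A rough, self-contained sketch of that guard using a throwaway background process; the `ps -o comm= -p` form is a portable stand-in for the script's `ps --no-headers -o comm=`:

```shell
# Spawn a disposable process to play the role of the target PID.
sleep 60 &
pid=$!
# Resolve the command name for the PID, as killprocess does before killing.
name=$(ps -o comm= -p "$pid" | awk '{print $1}')
# Guard: never kill a process whose command name resolves to "sudo".
if [ "$name" != "sudo" ]; then
    kill "$pid" 2>/dev/null
fi
echo "killed process with pid $pid ($name)"
```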
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # killprocess 1796242 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 1796242 ']' 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 1796242 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:26:34.603 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1796242 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1796242' 00:26:34.863 killing process with pid 1796242 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 1796242 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 1796242 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@254 -- # local dev 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:34.863 08:23:48 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # 
_remove_target_ns 00:26:37.400 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@121 -- # return 0 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # ip addr 
flush dev cvl_0_1 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@274 -- # iptr 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-save 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@548 -- # iptables-restore 00:26:37.401 00:26:37.401 real 0m37.757s 00:26:37.401 user 1m58.907s 00:26:37.401 sys 0m8.151s 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:26:37.401 ************************************ 00:26:37.401 END TEST nvmf_failover 00:26:37.401 ************************************ 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:37.401 08:23:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:37.401 ************************************ 00:26:37.401 START TEST nvmf_host_discovery 00:26:37.401 ************************************ 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:37.401 * Looking for test storage... 
00:26:37.401 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:37.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.401 --rc genhtml_branch_coverage=1 00:26:37.401 --rc genhtml_function_coverage=1 00:26:37.401 --rc 
genhtml_legend=1 00:26:37.401 --rc geninfo_all_blocks=1 00:26:37.401 --rc geninfo_unexecuted_blocks=1 00:26:37.401 00:26:37.401 ' 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:37.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.401 --rc genhtml_branch_coverage=1 00:26:37.401 --rc genhtml_function_coverage=1 00:26:37.401 --rc genhtml_legend=1 00:26:37.401 --rc geninfo_all_blocks=1 00:26:37.401 --rc geninfo_unexecuted_blocks=1 00:26:37.401 00:26:37.401 ' 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:37.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.401 --rc genhtml_branch_coverage=1 00:26:37.401 --rc genhtml_function_coverage=1 00:26:37.401 --rc genhtml_legend=1 00:26:37.401 --rc geninfo_all_blocks=1 00:26:37.401 --rc geninfo_unexecuted_blocks=1 00:26:37.401 00:26:37.401 ' 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:37.401 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:37.401 --rc genhtml_branch_coverage=1 00:26:37.401 --rc genhtml_function_coverage=1 00:26:37.401 --rc genhtml_legend=1 00:26:37.401 --rc geninfo_all_blocks=1 00:26:37.401 --rc geninfo_unexecuted_blocks=1 00:26:37.401 00:26:37.401 ' 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:37.401 08:23:51 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:37.401 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # : 0 00:26:37.402 
08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:37.402 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- 
# '[' -z tcp ']' 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:26:37.402 08:23:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@133 -- # local -A pci_drivers 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # e810=() 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # x722=() 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # mlx=() 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:43.976 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:43.976 08:23:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:43.976 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.976 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.977 08:23:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:43.977 Found net devices under 0000:86:00.0: cvl_0_0 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:43.977 Found net devices under 0000:86:00.1: cvl_0_1 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:43.977 08:23:56 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@247 -- # create_target_ns 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@28 -- # local -g 
_dev 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@143 -- # local 
dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:43.977 08:23:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:43.977 10.0.0.1 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.977 08:23:57 
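The `val_to_ip` calls traced above turn the integer IP pool value (0x0a000001 = 167772161) into dotted-quad notation before `ip addr add`. A minimal self-contained sketch of that helper (the byte-shift arithmetic is an assumption; the trace only shows the final `printf`):

```shell
# Sketch of the val_to_ip helper seen in nvmf/setup.sh: unpack a
# 32-bit integer into dotted-quad notation, most significant byte first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8) & 0xff )) \
    $(( val & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This is why consecutive pool values (`ip_pool += 2` per pair) yield the initiator/target addresses 10.0.0.1 and 10.0.0.2 seen in the rest of the trace.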
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:43.977 10.0.0.2 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
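The interface-pair setup traced above can be condensed into one function: move the target NIC into the namespace, assign the address pair, bring both sides up, and open the NVMe/TCP port. This is a hypothetical condensed sketch (function name, fixed addresses, and port are assumptions drawn from the trace), defined but not invoked since every step requires root:

```shell
# Hypothetical condensed form of the setup_interface_pair flow above.
# Requires root; definition only, not invoked here.
setup_pair() {
  local initiator=$1 target=$2 ns=$3
  ip netns add "$ns"                                 # create_target_ns
  ip netns exec "$ns" ip link set lo up              # set_up lo in ns
  ip link set "$target" netns "$ns"                  # add_to_ns
  ip addr add 10.0.0.1/24 dev "$initiator"           # set_ip (host side)
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"
  ip link set "$initiator" up
  ip netns exec "$ns" ip link set "$target" up
  # ipts wrapper: allow NVMe/TCP traffic to the target port
  iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
}
```

The trace additionally writes each address to `/sys/class/net/<dev>/ifalias` so later helpers can read it back without parsing `ip addr` output.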
nvmf/setup.sh@89 -- # (( pair = 0 )) 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:26:43.977 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local 
ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:43.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:43.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.496 ms 00:26:43.978 00:26:43.978 --- 10.0.0.1 ping statistics --- 00:26:43.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.978 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:26:43.978 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:43.978 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:26:43.978 00:26:43.978 --- 10.0.0.2 ping statistics --- 00:26:43.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:43.978 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair++ )) 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # return 0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # 
[[ -n cvl_0_0 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=initiator1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # return 1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev= 00:26:43.978 08:23:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@160 -- # return 0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target0 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:43.978 08:23:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # get_net_dev target1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # local dev=target1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # return 1 00:26:43.978 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@159 -- # dev= 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@160 -- # return 0 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:26:43.979 08:23:57 
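The repeated `get_ip_address` expansions above all reduce to reading the ifalias file populated during setup. A small sketch of that lookup (namespace handling omitted; the fallback behavior when the alias is empty is an assumption):

```shell
# Sketch of get_ip_address: the setup scripts store each device's IP in
# its kernel ifalias and read it back here. Returns nonzero if unset.
get_ip_address() {
  local dev=$1 ip
  ip=$(cat "/sys/class/net/$dev/ifalias" 2>/dev/null)
  [[ -n $ip ]] && echo "$ip"
}
```

This also explains the `return 1` branches for `initiator1`/`target1` above: with only one pair in `dev_map`, lookups for a second pair find no device, so `NVMF_SECOND_INITIATOR_IP` and `NVMF_SECOND_TARGET_IP` stay empty.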
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:26:43.979 ' 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # nvmfpid=1804565 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # waitforlisten 1804565 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 1804565 ']' 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:43.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.979 [2024-11-20 08:23:57.364249] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:26:43.979 [2024-11-20 08:23:57.364298] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:43.979 [2024-11-20 08:23:57.442193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.979 [2024-11-20 08:23:57.483314] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:43.979 [2024-11-20 08:23:57.483350] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:43.979 [2024-11-20 08:23:57.483357] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:43.979 [2024-11-20 08:23:57.483364] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:43.979 [2024-11-20 08:23:57.483369] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:43.979 [2024-11-20 08:23:57.483905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.979 [2024-11-20 08:23:57.619341] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.979 [2024-11-20 08:23:57.631521] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:26:43.979 08:23:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.979 null0 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.979 null1 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1804588 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1804588 /tmp/host.sock 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 1804588 ']' 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:26:43.979 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.979 [2024-11-20 08:23:57.708558] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:26:43.979 [2024-11-20 08:23:57.708597] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1804588 ] 00:26:43.979 [2024-11-20 08:23:57.782361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.979 [2024-11-20 08:23:57.822719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:26:43.979 
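Both app starts above ("Waiting for process to start up and listen on UNIX domain socket ...") use the `waitforlisten` pattern: poll until the target creates its RPC Unix socket, bailing out if the process dies first. A hedged sketch (polling interval and socket test are assumptions; only the retry cap matches the `max_retries=100` in the trace):

```shell
# Sketch of the waitforlisten pattern: wait for an RPC Unix socket to
# appear, or fail early if the process exits. Definition only.
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
  while (( max_retries-- > 0 )); do
    [[ -S $rpc_addr ]] && return 0          # socket is up
    kill -0 "$pid" 2>/dev/null || return 1  # process died
    sleep 0.1
  done
  return 1                                  # timed out
}
```

Here it is used twice: once for the target started inside `nvmf_ns_spdk` on `/var/tmp/spdk.sock`, and once for the host app on `/tmp/host.sock` (selected via `rpc_cmd -s /tmp/host.sock` in the calls that follow).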
08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:43.979 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:26:43.979 08:23:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:26:44.239 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.239 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.239 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:44.239 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:44.239 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.239 08:23:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.239 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:26:44.239 
08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.240 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.240 [2024-11-20 08:23:58.261132] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@55 -- # sort 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.499 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:44.500 
08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:26:44.500 08:23:58 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:45.068 [2024-11-20 08:23:58.997353] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:45.068 [2024-11-20 08:23:58.997373] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:45.068 [2024-11-20 08:23:58.997386] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:45.068 [2024-11-20 08:23:59.085638] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:26:45.327 [2024-11-20 08:23:59.269688] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was 
created to 10.0.0.2:4420 00:26:45.327 [2024-11-20 08:23:59.270491] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa85dd0:1 started. 00:26:45.327 [2024-11-20 08:23:59.271866] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:45.327 [2024-11-20 08:23:59.271881] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:45.327 [2024-11-20 08:23:59.276333] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa85dd0 was disconnected and freed. delete nvme_qpair. 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.585 
08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ 
"$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:45.585 
08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:45.585 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:45.586 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.586 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:45.586 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:45.845 [2024-11-20 08:23:59.672040] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xa92f90:1 started. 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count 
'&&' '((notification_count' == 'expected_count))' 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:45.845 [2024-11-20 08:23:59.717709] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xa92f90 was disconnected and freed. delete nvme_qpair. 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.845 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.845 [2024-11-20 08:23:59.761174] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:26:45.845 [2024-11-20 08:23:59.761430] 
bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:45.846 [2024-11-20 08:23:59.761448] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:45.846 [2024-11-20 08:23:59.847697] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:45.846 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:46.105 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.105 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 
4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:46.105 08:23:59 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:46.105 [2024-11-20 08:24:00.073739] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:46.106 [2024-11-20 08:24:00.073781] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:46.106 [2024-11-20 08:24:00.073789] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:46.106 [2024-11-20 08:24:00.073795] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.043 08:24:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.043 [2024-11-20 08:24:01.005555] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:47.043 [2024-11-20 08:24:01.005579] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:47.043 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.043 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:47.043 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:47.043 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local 
max=10 00:26:47.043 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:47.043 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:47.043 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:47.043 [2024-11-20 08:24:01.014141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.043 [2024-11-20 08:24:01.014160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.043 [2024-11-20 08:24:01.014170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.043 [2024-11-20 08:24:01.014177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.044 [2024-11-20 08:24:01.014190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.044 [2024-11-20 08:24:01.014196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.044 [2024-11-20 08:24:01.014209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:47.044 [2024-11-20 08:24:01.014216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:47.044 [2024-11-20 08:24:01.014222] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56390 is same with the state(6) to be set 00:26:47.044 08:24:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:47.044 [2024-11-20 08:24:01.024152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa56390 (9): Bad file descriptor 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.044 [2024-11-20 08:24:01.034187] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:47.044 [2024-11-20 08:24:01.034199] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:47.044 [2024-11-20 08:24:01.034212] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:47.044 [2024-11-20 08:24:01.034217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:47.044 [2024-11-20 08:24:01.034237] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:47.044 [2024-11-20 08:24:01.034451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.044 [2024-11-20 08:24:01.034467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa56390 with addr=10.0.0.2, port=4420 00:26:47.044 [2024-11-20 08:24:01.034476] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56390 is same with the state(6) to be set 00:26:47.044 [2024-11-20 08:24:01.034488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa56390 (9): Bad file descriptor 00:26:47.044 [2024-11-20 08:24:01.034500] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:47.044 [2024-11-20 08:24:01.034507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:47.044 [2024-11-20 08:24:01.034515] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:47.044 [2024-11-20 08:24:01.034521] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:47.044 [2024-11-20 08:24:01.034526] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:47.044 [2024-11-20 08:24:01.034531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:47.044 [2024-11-20 08:24:01.044269] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:47.044 [2024-11-20 08:24:01.044279] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:47.044 [2024-11-20 08:24:01.044287] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:47.044 [2024-11-20 08:24:01.044292] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:47.044 [2024-11-20 08:24:01.044306] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:47.044 [2024-11-20 08:24:01.044413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.044 [2024-11-20 08:24:01.044425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa56390 with addr=10.0.0.2, port=4420 00:26:47.044 [2024-11-20 08:24:01.044433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56390 is same with the state(6) to be set 00:26:47.044 [2024-11-20 08:24:01.044443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa56390 (9): Bad file descriptor 00:26:47.044 [2024-11-20 08:24:01.044453] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:47.044 [2024-11-20 08:24:01.044460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:47.044 [2024-11-20 08:24:01.044466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:47.044 [2024-11-20 08:24:01.044472] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:47.044 [2024-11-20 08:24:01.044476] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:47.044 [2024-11-20 08:24:01.044481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:47.044 [2024-11-20 08:24:01.054336] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:47.044 [2024-11-20 08:24:01.054351] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:47.044 [2024-11-20 08:24:01.054355] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:47.044 [2024-11-20 08:24:01.054359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:47.044 [2024-11-20 08:24:01.054374] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:47.044 [2024-11-20 08:24:01.054485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.044 [2024-11-20 08:24:01.054498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa56390 with addr=10.0.0.2, port=4420 00:26:47.044 [2024-11-20 08:24:01.054506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56390 is same with the state(6) to be set 00:26:47.044 [2024-11-20 08:24:01.054516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa56390 (9): Bad file descriptor 00:26:47.044 [2024-11-20 08:24:01.054527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:47.044 [2024-11-20 08:24:01.054534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:47.044 [2024-11-20 08:24:01.054541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:47.044 [2024-11-20 08:24:01.054547] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:47.044 [2024-11-20 08:24:01.054552] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:47.044 [2024-11-20 08:24:01.054556] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.044 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # 
xargs 00:26:47.044 [2024-11-20 08:24:01.064405] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:47.044 [2024-11-20 08:24:01.064417] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:47.044 [2024-11-20 08:24:01.064422] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:47.044 [2024-11-20 08:24:01.064427] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:47.044 [2024-11-20 08:24:01.064441] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:47.045 [2024-11-20 08:24:01.064533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.045 [2024-11-20 08:24:01.064545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa56390 with addr=10.0.0.2, port=4420 00:26:47.045 [2024-11-20 08:24:01.064553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56390 is same with the state(6) to be set 00:26:47.045 [2024-11-20 08:24:01.064564] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa56390 (9): Bad file descriptor 00:26:47.045 [2024-11-20 08:24:01.064575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:47.045 [2024-11-20 08:24:01.064582] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:47.045 [2024-11-20 08:24:01.064590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:47.045 [2024-11-20 08:24:01.064597] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:47.045 [2024-11-20 08:24:01.064601] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:47.045 [2024-11-20 08:24:01.064605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:47.305 [2024-11-20 08:24:01.074472] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:47.305 [2024-11-20 08:24:01.074486] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:47.305 [2024-11-20 08:24:01.074491] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:47.305 [2024-11-20 08:24:01.074495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:47.305 [2024-11-20 08:24:01.074514] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:47.305 [2024-11-20 08:24:01.074689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.305 [2024-11-20 08:24:01.074703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa56390 with addr=10.0.0.2, port=4420 00:26:47.305 [2024-11-20 08:24:01.074711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56390 is same with the state(6) to be set 00:26:47.305 [2024-11-20 08:24:01.074722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa56390 (9): Bad file descriptor 00:26:47.305 [2024-11-20 08:24:01.074732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:47.305 [2024-11-20 08:24:01.074739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:47.305 [2024-11-20 08:24:01.074745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:47.305 [2024-11-20 08:24:01.074751] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:47.305 [2024-11-20 08:24:01.074755] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:47.305 [2024-11-20 08:24:01.074759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:47.305 [2024-11-20 08:24:01.084546] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:47.305 [2024-11-20 08:24:01.084556] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:47.305 [2024-11-20 08:24:01.084560] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:47.305 [2024-11-20 08:24:01.084564] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:47.305 [2024-11-20 08:24:01.084579] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:47.305 [2024-11-20 08:24:01.084672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:47.305 [2024-11-20 08:24:01.084686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa56390 with addr=10.0.0.2, port=4420 00:26:47.305 [2024-11-20 08:24:01.084694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa56390 is same with the state(6) to be set 00:26:47.305 [2024-11-20 08:24:01.084705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xa56390 (9): Bad file descriptor 00:26:47.305 [2024-11-20 08:24:01.084715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:47.305 [2024-11-20 08:24:01.084721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:47.305 [2024-11-20 08:24:01.084728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:47.305 [2024-11-20 08:24:01.084734] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:47.305 [2024-11-20 08:24:01.084738] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:47.305 [2024-11-20 08:24:01.084742] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:47.305 [2024-11-20 08:24:01.092853] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:47.305 [2024-11-20 08:24:01.092869] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.305 
08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:26:47.305 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:47.306 08:24:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:47.306 
08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:47.306 08:24:01 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.306 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:47.565 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.565 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:26:47.565 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:26:47.565 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:47.565 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:47.565 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:47.565 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.565 08:24:01 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.501 [2024-11-20 08:24:02.426686] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:48.501 [2024-11-20 08:24:02.426704] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:48.501 [2024-11-20 08:24:02.426716] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:48.501 [2024-11-20 08:24:02.512977] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:48.761 [2024-11-20 08:24:02.611681] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:48.761 [2024-11-20 08:24:02.612306] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xa55a10:1 started. 00:26:48.761 [2024-11-20 08:24:02.613914] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:48.761 [2024-11-20 08:24:02.613941] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.761 [2024-11-20 08:24:02.615311] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xa55a10 was disconnected and freed. delete nvme_qpair. 
00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.761 request: 00:26:48.761 { 00:26:48.761 "name": "nvme", 00:26:48.761 "trtype": "tcp", 00:26:48.761 "traddr": "10.0.0.2", 00:26:48.761 "adrfam": "ipv4", 00:26:48.761 "trsvcid": "8009", 00:26:48.761 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:48.761 "wait_for_attach": true, 00:26:48.761 "method": "bdev_nvme_start_discovery", 00:26:48.761 "req_id": 1 00:26:48.761 } 00:26:48.761 Got JSON-RPC error response 00:26:48.761 response: 00:26:48.761 { 00:26:48.761 "code": -17, 00:26:48.761 
"message": "File exists" 00:26:48.761 } 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.761 request: 00:26:48.761 { 00:26:48.761 "name": "nvme_second", 00:26:48.761 "trtype": "tcp", 00:26:48.761 "traddr": "10.0.0.2", 00:26:48.761 "adrfam": "ipv4", 00:26:48.761 "trsvcid": "8009", 00:26:48.761 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:48.761 "wait_for_attach": true, 00:26:48.761 "method": "bdev_nvme_start_discovery", 00:26:48.761 "req_id": 1 00:26:48.761 } 00:26:48.761 Got JSON-RPC error response 00:26:48.761 response: 00:26:48.761 { 00:26:48.761 "code": -17, 00:26:48.761 "message": "File exists" 00:26:48.761 } 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # xargs 00:26:48.761 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:49.021 
08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:49.021 08:24:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:49.959 [2024-11-20 08:24:03.857607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:49.959 [2024-11-20 08:24:03.857636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa537f0 with addr=10.0.0.2, port=8010 00:26:49.959 [2024-11-20 08:24:03.857649] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:49.959 [2024-11-20 08:24:03.857657] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:49.959 [2024-11-20 08:24:03.857663] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:50.897 [2024-11-20 08:24:04.860141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:50.897 [2024-11-20 08:24:04.860167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa537f0 with addr=10.0.0.2, port=8010 00:26:50.897 [2024-11-20 08:24:04.860179] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:50.897 [2024-11-20 08:24:04.860185] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:50.897 [2024-11-20 
08:24:04.860191] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:52.277 [2024-11-20 08:24:05.862320] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:52.277 request: 00:26:52.277 { 00:26:52.277 "name": "nvme_second", 00:26:52.277 "trtype": "tcp", 00:26:52.277 "traddr": "10.0.0.2", 00:26:52.277 "adrfam": "ipv4", 00:26:52.277 "trsvcid": "8010", 00:26:52.277 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:52.277 "wait_for_attach": false, 00:26:52.277 "attach_timeout_ms": 3000, 00:26:52.277 "method": "bdev_nvme_start_discovery", 00:26:52.277 "req_id": 1 00:26:52.277 } 00:26:52.277 Got JSON-RPC error response 00:26:52.277 response: 00:26:52.277 { 00:26:52.277 "code": -110, 00:26:52.277 "message": "Connection timed out" 00:26:52.277 } 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 
00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1804588 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@99 -- # sync 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@102 -- # set +e 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:52.277 rmmod nvme_tcp 00:26:52.277 rmmod nvme_fabrics 00:26:52.277 rmmod nvme_keyring 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@106 -- # set -e 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@107 -- # return 0 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # '[' -n 1804565 ']' 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # killprocess 1804565 00:26:52.277 
08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 1804565 ']' 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 1804565 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.277 08:24:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1804565 00:26:52.277 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:52.277 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:52.277 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1804565' 00:26:52.277 killing process with pid 1804565 00:26:52.277 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 1804565 00:26:52.277 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 1804565 00:26:52.277 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:52.277 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:26:52.277 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@254 -- # local dev 00:26:52.277 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@257 -- # remove_target_ns 00:26:52.277 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:52.277 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:52.277 08:24:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- 
# _remove_target_ns 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@258 -- # delete_main_bridge 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@121 -- # return 0 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # eval ' ip addr 
flush dev cvl_0_1' 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@274 -- # iptr 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-save 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:26:54.813 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@548 -- # iptables-restore 00:26:54.813 00:26:54.813 real 0m17.260s 00:26:54.814 user 0m20.335s 00:26:54.814 sys 0m5.951s 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.814 ************************************ 00:26:54.814 END TEST nvmf_host_discovery 00:26:54.814 ************************************ 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:54.814 ************************************ 00:26:54.814 START TEST nvmf_host_multipath_status 00:26:54.814 ************************************ 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:26:54.814 * Looking for test storage... 00:26:54.814 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 
v 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:54.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.814 --rc genhtml_branch_coverage=1 00:26:54.814 --rc genhtml_function_coverage=1 00:26:54.814 --rc genhtml_legend=1 00:26:54.814 --rc geninfo_all_blocks=1 00:26:54.814 --rc geninfo_unexecuted_blocks=1 00:26:54.814 00:26:54.814 ' 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:54.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.814 --rc genhtml_branch_coverage=1 00:26:54.814 --rc genhtml_function_coverage=1 00:26:54.814 --rc genhtml_legend=1 00:26:54.814 --rc geninfo_all_blocks=1 00:26:54.814 --rc geninfo_unexecuted_blocks=1 00:26:54.814 00:26:54.814 ' 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:54.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.814 --rc genhtml_branch_coverage=1 00:26:54.814 --rc genhtml_function_coverage=1 00:26:54.814 --rc genhtml_legend=1 00:26:54.814 --rc geninfo_all_blocks=1 00:26:54.814 --rc geninfo_unexecuted_blocks=1 00:26:54.814 00:26:54.814 ' 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:54.814 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:54.814 --rc genhtml_branch_coverage=1 00:26:54.814 --rc genhtml_function_coverage=1 00:26:54.814 --rc genhtml_legend=1 00:26:54.814 --rc geninfo_all_blocks=1 00:26:54.814 --rc geninfo_unexecuted_blocks=1 00:26:54.814 00:26:54.814 ' 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:54.814 08:24:08 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.814 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:54.815 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@260 -- # remove_target_ns 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # xtrace_disable 00:26:54.815 08:24:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # pci_devs=() 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # net_devs=() 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:01.387 08:24:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # e810=() 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # local -ga e810 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # x722=() 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # local -ga x722 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # mlx=() 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # local -ga mlx 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:01.387 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 
00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:01.387 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:01.387 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.388 
08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:01.388 Found net devices under 0000:86:00.0: cvl_0_0 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:01.388 Found net devices under 0000:86:00.1: cvl_0_1 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # is_hw=yes 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:01.388 08:24:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@247 -- # create_target_ns 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp 
ip_pool=0x0a000001 max 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 
00:27:01.388 10.0.0.1 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:01.388 10.0.0.2 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- 
# local dev=cvl_0_0 in_ns= 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:01.388 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # 
dev_map["initiator$id"]=cvl_0_0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:01.389 08:24:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:01.389 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:01.389 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:27:01.389 00:27:01.389 --- 10.0.0.1 ping statistics --- 00:27:01.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.389 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- 
# ip=10.0.0.2 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:01.389 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:01.389 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.136 ms 00:27:01.389 00:27:01.389 --- 10.0.0.2 ping statistics --- 00:27:01.389 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:01.389 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # return 0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:01.389 08:24:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:01.389 08:24:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # return 1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev= 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@160 -- # return 0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 
00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:01.389 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target0 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:01.390 08:24:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # local dev=target1 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # return 1 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@159 -- # dev= 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@160 -- # return 0 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:27:01.390 ' 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:01.390 08:24:14 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=1810196 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 1810196 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1810196 ']' 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:01.390 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:01.390 [2024-11-20 08:24:14.682542] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:27:01.390 [2024-11-20 08:24:14.682586] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:01.390 [2024-11-20 08:24:14.759925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:01.390 [2024-11-20 08:24:14.801832] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:01.390 [2024-11-20 08:24:14.801864] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:01.390 [2024-11-20 08:24:14.801872] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:01.390 [2024-11-20 08:24:14.801879] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:01.390 [2024-11-20 08:24:14.801884] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:01.390 [2024-11-20 08:24:14.803015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:01.390 [2024-11-20 08:24:14.803018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1810196 00:27:01.390 08:24:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:01.390 [2024-11-20 08:24:15.098853] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:01.390 08:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:27:01.390 Malloc0 00:27:01.390 08:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:27:01.649 08:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:01.908 08:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:01.908 [2024-11-20 08:24:15.924495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:02.167 08:24:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:02.167 [2024-11-20 08:24:16.112976] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:02.167 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:27:02.167 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1810450 00:27:02.167 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:27:02.167 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1810450 /var/tmp/bdevperf.sock 00:27:02.167 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 1810450 ']' 00:27:02.167 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:02.167 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:02.167 08:24:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:02.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:02.167 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:02.167 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:02.427 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:02.427 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:27:02.427 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:27:02.686 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:02.945 Nvme0n1 00:27:02.945 08:24:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:27:03.513 Nvme0n1 00:27:03.513 08:24:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:27:03.513 08:24:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock 
perform_tests 00:27:05.409 08:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:27:05.409 08:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:05.667 08:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:05.926 08:24:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:27:06.861 08:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:27:06.861 08:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:06.861 08:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:06.861 08:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:07.119 08:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.119 08:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:07.119 08:24:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.119 08:24:20 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:07.120 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:07.120 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:07.120 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.120 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:07.378 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.378 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:07.378 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.378 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:07.638 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.638 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:07.638 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.638 
08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:07.898 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:07.898 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:07.898 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:07.898 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:08.157 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:08.157 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:27:08.157 08:24:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:08.157 08:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:08.415 08:24:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:27:09.793 08:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:27:09.793 08:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:09.793 08:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.793 08:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:09.793 08:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:09.793 08:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:09.793 08:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.793 08:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:09.793 08:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:09.793 08:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:09.793 08:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:09.793 08:24:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:10.052 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.052 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:10.052 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.052 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:10.311 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.311 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:10.311 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.311 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:10.570 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.570 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:10.570 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:10.570 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:10.828 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:10.828 08:24:24 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:27:10.828 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:11.086 08:24:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:11.086 08:24:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:27:12.464 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:27:12.464 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:12.464 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.464 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:12.464 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.464 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:12.464 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.464 08:24:26 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:12.464 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:12.464 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:12.723 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.723 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:12.723 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.723 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:12.723 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.723 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:12.982 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:12.982 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:12.982 08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:12.982 
08:24:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:13.240 08:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.240 08:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:13.240 08:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:13.240 08:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:13.499 08:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:13.499 08:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:27:13.499 08:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:13.499 08:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:13.758 08:24:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:27:15.133 08:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:27:15.133 08:24:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:15.133 08:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.133 08:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:15.133 08:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.133 08:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:15.133 08:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.133 08:24:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:15.133 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:15.133 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:15.133 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:15.133 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.391 08:24:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:15.391 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.392 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:15.650 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.650 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:15.650 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.650 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:15.909 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:15.909 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:15.909 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:15.909 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:16.169 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:16.169 
08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:27:16.169 08:24:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:16.428 08:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:16.428 08:24:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:27:17.805 08:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:27:17.805 08:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:17.805 08:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.805 08:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:17.805 08:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.805 08:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:17.805 08:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.805 08:24:31 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:17.805 08:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:17.805 08:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:17.805 08:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:17.805 08:24:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:18.065 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.065 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:18.065 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.065 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:18.323 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:18.323 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:18.323 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.323 
08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:18.582 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:18.582 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:18.582 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:18.582 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:18.841 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:18.841 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:27:18.841 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:27:18.841 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:19.099 08:24:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:27:20.036 08:24:33 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:27:20.036 08:24:33 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:20.036 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.036 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:20.295 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:20.295 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:20.295 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.295 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:20.553 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.553 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:20.553 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.553 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:20.812 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:20.812 08:24:34 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:20.812 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:20.812 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:21.070 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.070 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:27:21.070 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.070 08:24:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:21.070 08:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:21.070 08:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:21.070 08:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:21.070 08:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:21.330 08:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:21.330 
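Every port_status check in the trace above applies the same jq filter, `.poll_groups[].io_paths[] | select(.transport.trsvcid=="PORT").FIELD`, to the output of the `bdev_nvme_get_io_paths` RPC. A minimal Python sketch of the selection that filter performs (the sample payload below is hypothetical, illustrating only the fields the trace queries; real output comes from rpc.py over /var/tmp/bdevperf.sock):

```python
# Sketch of the jq filter used by port_status (multipath_status.sh@64):
#   .poll_groups[].io_paths[] | select(.transport.trsvcid=="4420").current
# The payload shape below is an assumption inferred from the trace.

def port_field(payload, trsvcid, field):
    """Return the requested field of the io_path listening on trsvcid."""
    for group in payload.get("poll_groups", []):
        for path in group.get("io_paths", []):
            if path["transport"]["trsvcid"] == trsvcid:
                return path[field]
    return None  # no io_path on that port

sample = {
    "poll_groups": [{
        "io_paths": [
            {"transport": {"trsvcid": "4420"},
             "current": True, "connected": True, "accessible": True},
            {"transport": {"trsvcid": "4421"},
             "current": False, "connected": True, "accessible": False},
        ]
    }]
}

print(port_field(sample, "4420", "current"))     # True
print(port_field(sample, "4421", "accessible"))  # False
```

The `[[ true == \t\r\u\e ]]` lines in the trace are the bash comparison of this jq output against the expected value passed to port_status.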
08:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:27:21.589 08:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:27:21.589 08:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:27:21.848 08:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:22.108 08:24:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:27:23.091 08:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:27:23.091 08:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:23.091 08:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.091 08:24:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:23.091 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.091 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:23.091 
08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.091 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:23.398 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.398 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:23.398 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.398 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:23.657 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.658 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:23.658 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.658 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:23.917 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.917 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:23.917 
08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.917 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:23.917 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:23.917 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:23.917 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:23.917 08:24:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:24.175 08:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:24.175 08:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:27:24.175 08:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:24.433 08:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:27:24.692 08:24:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
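Each set_ANA_state call in the trace expands to two `nvmf_subsystem_listener_set_ana_state` RPC invocations, one per listener port (multipath_status.sh@59-60). A hedged reconstruction of how those command lines are assembled, using the NQN and address seen in the trace (this helper is an illustration, not the actual multipath_status.sh source):

```python
# Reconstruct the pair of rpc.py invocations issued by set_ANA_state,
# matching the command lines visible in the trace. NQN/address are
# taken from the log; the function itself is a sketch.
NQN = "nqn.2016-06.io.spdk:cnode1"
ADDR = "10.0.0.2"

def set_ana_state_cmds(state_4420, state_4421, rpc="scripts/rpc.py"):
    """Return the two command lines, for ports 4420 and 4421 in turn."""
    cmds = []
    for port, state in (("4420", state_4420), ("4421", state_4421)):
        cmds.append(f"{rpc} nvmf_subsystem_listener_set_ana_state "
                    f"{NQN} -t tcp -a {ADDR} -s {port} -n {state}")
    return cmds

for cmd in set_ana_state_cmds("non_optimized", "optimized"):
    print(cmd)
```

In the trace, each state change is followed by `sleep 1` before check_status runs, giving the initiator time to observe the new ANA state.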
00:27:25.627 08:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:27:25.627 08:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:27:25.627 08:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.627 08:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:25.886 08:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:25.887 08:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:25.887 08:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:25.887 08:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:26.145 08:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.145 08:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:26.145 08:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.145 08:24:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:27:26.145 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.145 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:26.145 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.145 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:26.404 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.404 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:26.404 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.404 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:26.663 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.663 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:26.663 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:26.663 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:27:26.922 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:26.922 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:27:26.922 08:24:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:27.182 08:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:27:27.182 08:24:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:27:28.559 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:27:28.559 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:28.559 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.559 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:28.559 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.559 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:27:28.559 08:24:42 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.559 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:28.818 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:28.818 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:28.818 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:28.819 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:27:29.078 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.078 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:29.078 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.078 08:24:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:29.078 08:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.078 08:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:29.078 08:24:43 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.078 08:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:29.338 08:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.338 08:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:27:29.338 08:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:29.338 08:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:29.597 08:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:29.597 08:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:27:29.597 08:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:27:29.856 08:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:27:30.116 08:24:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
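check_status, as invoked throughout the trace, takes six booleans: current, connected, and accessible for port 4420 and then 4421. A sketch of the expectation matrix it encodes, inferred from the port_status calls at multipath_status.sh@68-73 (an assumption about argument order, not the script source):

```python
# Map check_status's six positional arguments to the per-port fields
# that port_status verifies (inferred from the trace; hypothetical).
def expectations(cur_4420, cur_4421, conn_4420, conn_4421,
                 acc_4420, acc_4421):
    return {
        ("4420", "current"): cur_4420,
        ("4421", "current"): cur_4421,
        ("4420", "connected"): conn_4420,
        ("4421", "connected"): conn_4421,
        ("4420", "accessible"): acc_4420,
        ("4421", "accessible"): acc_4421,
    }

# The 'check_status true false true true true false' case above:
# after setting ANA states to non_optimized/inaccessible, 4421 stays
# connected but is neither current nor accessible.
exp = expectations(True, False, True, True, True, False)
print(exp[("4421", "connected")], exp[("4421", "accessible")])
```

Note that an inaccessible ANA state leaves the TCP connection up (connected stays true); only current and accessible flip, which is exactly what the trace verifies.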
00:27:31.054 08:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:27:31.054 08:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:27:31.054 08:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.054 08:24:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:27:31.313 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.313 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:27:31.313 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.313 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:27:31.572 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:31.572 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:27:31.572 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.572 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:27:31.572 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.572 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:27:31.572 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.572 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:27:31.831 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:31.831 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:27:31.831 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:31.831 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:27:32.090 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:27:32.090 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:27:32.090 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:27:32.090 08:24:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:27:32.349 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:27:32.349 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1810450 00:27:32.349 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1810450 ']' 00:27:32.349 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1810450 00:27:32.349 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:32.349 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.349 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1810450 00:27:32.349 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:27:32.349 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:27:32.349 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1810450' 00:27:32.349 killing process with pid 1810450 00:27:32.349 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1810450 00:27:32.349 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1810450 00:27:32.349 { 00:27:32.349 "results": [ 00:27:32.349 { 00:27:32.349 "job": "Nvme0n1", 00:27:32.349 "core_mask": "0x4", 00:27:32.349 "workload": "verify", 00:27:32.349 "status": "terminated", 00:27:32.349 "verify_range": { 00:27:32.349 "start": 0, 00:27:32.349 "length": 16384 00:27:32.349 }, 00:27:32.349 "queue_depth": 128, 00:27:32.349 "io_size": 4096, 00:27:32.349 "runtime": 
28.823287, 00:27:32.349 "iops": 10609.893313000699, 00:27:32.349 "mibps": 41.44489575390898, 00:27:32.349 "io_failed": 0, 00:27:32.349 "io_timeout": 0, 00:27:32.349 "avg_latency_us": 12044.045822612461, 00:27:32.349 "min_latency_us": 522.7276190476191, 00:27:32.349 "max_latency_us": 3083812.083809524 00:27:32.349 } 00:27:32.349 ], 00:27:32.349 "core_count": 1 00:27:32.349 } 00:27:32.633 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1810450 00:27:32.633 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:32.633 [2024-11-20 08:24:16.173188] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:27:32.633 [2024-11-20 08:24:16.173255] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1810450 ] 00:27:32.633 [2024-11-20 08:24:16.250645] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:32.633 [2024-11-20 08:24:16.291232] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:32.633 Running I/O for 90 seconds... 
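The terminated-job summary above reports both `iops` and `mibps` for the 4096-byte verify workload. As a sanity check on those figures, the throughput in MiB/s follows directly from IOPS times I/O size; a minimal sketch, using the numbers from the JSON result above:

```python
# Values taken from the bdevperf "results" block above.
iops = 10609.893313000699      # completed I/Os per second
io_size = 4096                 # bytes per I/O, from the job config

# MiB/s = (I/Os per second * bytes per I/O) / bytes per MiB
mibps = iops * io_size / (1024 * 1024)

print(round(mibps, 6))  # ≈ 41.444896, matching the reported "mibps" field
```

This confirms the reported `mibps` value (41.44489575390898) is internally consistent with the reported IOPS and the 4 KiB `io_size`.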
00:27:32.633 11365.00 IOPS, 44.39 MiB/s [2024-11-20T07:24:46.661Z] 11371.50 IOPS, 44.42 MiB/s [2024-11-20T07:24:46.661Z] 11358.00 IOPS, 44.37 MiB/s [2024-11-20T07:24:46.661Z] 11398.00 IOPS, 44.52 MiB/s [2024-11-20T07:24:46.661Z] 11416.20 IOPS, 44.59 MiB/s [2024-11-20T07:24:46.661Z] 11471.67 IOPS, 44.81 MiB/s [2024-11-20T07:24:46.661Z] 11460.00 IOPS, 44.77 MiB/s [2024-11-20T07:24:46.661Z] 11482.12 IOPS, 44.85 MiB/s [2024-11-20T07:24:46.661Z] 11486.67 IOPS, 44.87 MiB/s [2024-11-20T07:24:46.661Z] 11483.90 IOPS, 44.86 MiB/s [2024-11-20T07:24:46.661Z] 11476.82 IOPS, 44.83 MiB/s [2024-11-20T07:24:46.661Z] 11478.75 IOPS, 44.84 MiB/s [2024-11-20T07:24:46.661Z] [2024-11-20 08:24:30.178139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.633 [2024-11-20 08:24:30.178181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:32.633 [2024-11-20 08:24:30.178207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.633 [2024-11-20 08:24:30.178216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:32.633 [2024-11-20 08:24:30.178229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.633 [2024-11-20 08:24:30.178237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:32.633 [2024-11-20 08:24:30.178249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.633 [2024-11-20 08:24:30.178256] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:32.633 [2024-11-20 08:24:30.178268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.633 [2024-11-20 08:24:30.178275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:32.633 [2024-11-20 08:24:30.178287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.633 [2024-11-20 08:24:30.178294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:32.633 [2024-11-20 08:24:30.178306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.633 [2024-11-20 08:24:30.178313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:32.633 [2024-11-20 08:24:30.178326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.633 [2024-11-20 08:24:30.178333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:33 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 
nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:91 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125104 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.178987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.178999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 
cid:45 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:125192 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.634 [2024-11-20 08:24:30.179267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:32.634 [2024-11-20 08:24:30.179280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 
sqhd:0013 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 
00:27:32.635 [2024-11-20 08:24:30.179539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 
[2024-11-20 08:24:30.179640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:32.635 
[2024-11-20 08:24:30.179747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.179789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.179796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.180335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.180351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.180366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.180374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.180386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 
08:24:30.180393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.180406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.180413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.180426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.180434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.180446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.180453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.180465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.180473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.180486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.180492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 
08:24:30.180505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.180512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.180524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.180534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.180547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.180553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:32.635 [2024-11-20 08:24:30.180565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.635 [2024-11-20 08:24:30.180572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180610] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.636 [2024-11-20 08:24:30.180629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.636 [2024-11-20 08:24:30.180657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180726] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180830] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180937] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.180982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.180994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181040] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181150] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.636 [2024-11-20 08:24:30.181257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.636 [2024-11-20 08:24:30.181278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.636 [2024-11-20 08:24:30.181297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.636 [2024-11-20 08:24:30.181316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:32.636 [2024-11-20 08:24:30.181329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.636 [2024-11-20 08:24:30.181336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.181348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.637 [2024-11-20 08:24:30.181354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.181366] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.637 [2024-11-20 08:24:30.181372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.181385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.637 [2024-11-20 08:24:30.181392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.181404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.181411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.181423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.181430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.181887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.181900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.181914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.181922] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.181934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.181944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.181956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.181963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.181975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.181982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.181995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182035] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182139] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182255] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182358] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.637 [2024-11-20 08:24:30.182537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:32.637 [2024-11-20 08:24:30.182549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.638 [2024-11-20 08:24:30.182556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:32.638 [2024-11-20 08:24:30.182569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.638 [2024-11-20 08:24:30.182576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:32.638 [2024-11-20 08:24:30.182588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.638 [2024-11-20 08:24:30.182596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
[... repeated nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs elided: WRITE and READ commands on qid:1 (lba 124784-125800, len:8), each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0009-0079 ...]
00:27:32.641 [2024-11-20 08:24:30.197130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.641 [2024-11-20 08:24:30.197139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:27:32.641 [2024-11-20 08:24:30.197156] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197463] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197607] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197754] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.197809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.197819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.198394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.198412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.198435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.198447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.198465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.198479] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.198498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.198512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.198532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.198544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:32.641 [2024-11-20 08:24:30.198565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.641 [2024-11-20 08:24:30.198579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.198599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.198613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.198632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.198645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.198665] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.198679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.198699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.198711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.198731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.198745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.198768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.198780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.198802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.198812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.198831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.198842] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.198863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.198876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.198896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.198909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.198931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.198944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.198969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.198980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199038] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199216] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199393] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.199504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.199525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.205242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.205265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.205280] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.205302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.205312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.205333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.205348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.205370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.642 [2024-11-20 08:24:30.205385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.205405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.642 [2024-11-20 08:24:30.205418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:32.642 [2024-11-20 08:24:30.205439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.643 [2024-11-20 08:24:30.205452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:32.643 [2024-11-20 08:24:30.205472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.643 [2024-11-20 08:24:30.205486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:32.643 [2024-11-20 08:24:30.205510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.643 [2024-11-20 08:24:30.205523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:32.643 [2024-11-20 08:24:30.205541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.643 [2024-11-20 08:24:30.205553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:32.643 [2024-11-20 08:24:30.205572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.643 [2024-11-20 08:24:30.205587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:32.643 [2024-11-20 08:24:30.205612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.643 [2024-11-20 08:24:30.205627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.643 [2024-11-20 08:24:30.205648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.643 [2024-11-20 08:24:30.205660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:32.643 [2024-11-20 08:24:30.205681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.643 [2024-11-20 08:24:30.205698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
[... further command/completion pairs elided: every remaining outstanding WRITE and READ on qid:1 completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02) p:0 m:0 dnr:0 ...]
00:27:32.646 [2024-11-20 08:24:30.210611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.646 [2024-11-20 08:24:30.210623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:32.646 [2024-11-20 08:24:30.210640] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.210651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.210668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.210679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.210696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.210708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.210725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.210735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.210753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.210763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.210781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.646 [2024-11-20 08:24:30.210791] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.210809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.646 [2024-11-20 08:24:30.210819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.210837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.210850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.210867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.210878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.210895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.210906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.210925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.210936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.210952] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.210971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.210988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211432] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.211478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.211489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.212297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.212316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.212339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.212349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:32.646 [2024-11-20 08:24:30.212373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.646 [2024-11-20 08:24:30.212384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.647 [2024-11-20 08:24:30.212559] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.647 [2024-11-20 08:24:30.212587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.647 [2024-11-20 08:24:30.212617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.647 [2024-11-20 08:24:30.212647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.647 [2024-11-20 08:24:30.212675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.647 [2024-11-20 08:24:30.212704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.647 [2024-11-20 08:24:30.212736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212883] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.212988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.212999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213378] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.647 [2024-11-20 08:24:30.213389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:32.647 [2024-11-20 08:24:30.213407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.648 [2024-11-20 08:24:30.213418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:32.648 [2024-11-20 08:24:30.213437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.648 [2024-11-20 08:24:30.213449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:32.648 [2024-11-20 08:24:30.213467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.648 [2024-11-20 08:24:30.213478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:32.648 [2024-11-20 08:24:30.213497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.648 [2024-11-20 08:24:30.213507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:32.648 [2024-11-20 08:24:30.213525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.648 [2024-11-20 08:24:30.213536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.213979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.213997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.214008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.214027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.214038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.214057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.214067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.214085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.214096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.214115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.214125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.214883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.214900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.214925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.214935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.214954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.214966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.214985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.214996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.215015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.215025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.215043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.215055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.215074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.215085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.215102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.215114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.215133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.215143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.215161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.215172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.215191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.215209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.215227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.215238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:27:32.648 [2024-11-20 08:24:30.215257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.648 [2024-11-20 08:24:30.215268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.215970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.215981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.216011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.216043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.216073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.216101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.649 [2024-11-20 08:24:30.216132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.649 [2024-11-20 08:24:30.216161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.216191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.216224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.216253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.216282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.216312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.216341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.216370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.649 [2024-11-20 08:24:30.216405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:27:32.649 [2024-11-20 08:24:30.216425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004e p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.216808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.216819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.217640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.217661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.217683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.217694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0054 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.217713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.217725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.217743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.217754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.217773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.217784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.217802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.217813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.217832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.217842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.217860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.217871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.217890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.217901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.217920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.650 [2024-11-20 08:24:30.217934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.217953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.650 [2024-11-20 08:24:30.217964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.217982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.650 [2024-11-20 08:24:30.217993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.650 [2024-11-20 08:24:30.218022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.650 [2024-11-20 08:24:30.218052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.650 [2024-11-20 08:24:30.218081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:32.650 [2024-11-20 08:24:30.218111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.218139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.218169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.218199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.218235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.218263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.218296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.218326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.218354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:32.650 [2024-11-20 08:24:30.218373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.650 [2024-11-20 08:24:30.218384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:27:32.651 [2024-11-20 08:24:30.218402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.651 [2024-11-20 08:24:30.218413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:32.651 [2024-11-20 08:24:30.218431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.651 [2024-11-20 08:24:30.218442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:32.651 [2024-11-20 08:24:30.218460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.651 [2024-11-20 08:24:30.218471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:27:32.651 [2024-11-20 08:24:30.218490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.651 [2024-11-20 08:24:30.218501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:32.651 [2024-11-20 08:24:30.218521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:32.651 [2024-11-20 08:24:30.218531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:32.651 [2024-11-20 08:24:30.218550] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218877] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.218975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.218994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.219005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.219025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.219036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.219054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.219065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.219083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.219093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.219112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.219123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.219143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.219154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.219171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.219183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.219205] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.219217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.219236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.219246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.219265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.219276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.219295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.219305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:32.651 [2024-11-20 08:24:30.219324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.651 [2024-11-20 08:24:30.219335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.219353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.219364] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.219382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.219396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.219414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.219424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.219443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.219453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.219472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.219482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220262] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220422] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220591] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220916] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.220976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.220986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.221005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.221017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.221035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.221046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.221064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.221075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.221093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.221104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.221123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.221133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:32.652 [2024-11-20 08:24:30.221151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.652 [2024-11-20 08:24:30.221162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221246] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221427] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.653 [2024-11-20 08:24:30.221520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.653 [2024-11-20 08:24:30.221549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221600] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221895] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.221984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.221996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.222004] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.222016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.222024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.222036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.222044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.222056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.222064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.222640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.222655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.222670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.222678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.222690] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.222698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.222711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.222719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.222732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.653 [2024-11-20 08:24:30.222742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:32.653 [2024-11-20 08:24:30.222755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.222764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.222776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.222784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.222797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.222805] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.222817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.222825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.222838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.222846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.222858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.654 [2024-11-20 08:24:30.222866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.222879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.654 [2024-11-20 08:24:30.222887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.222899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.654 [2024-11-20 08:24:30.222907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.222919] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.654 [2024-11-20 08:24:30.222926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.222939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.654 [2024-11-20 08:24:30.222947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.222961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.654 [2024-11-20 08:24:30.222968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.222981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.654 [2024-11-20 08:24:30.222988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223031] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223147] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223270] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223391] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223498] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:32.654 [2024-11-20 08:24:30.223558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.654 [2024-11-20 08:24:30.223566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.223931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.223938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.224473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.224486] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.224501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.224509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.224522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.224530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.224543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.224553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.224566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.224573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.224586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.224595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:32.655 [2024-11-20 08:24:30.224607] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.655 [2024-11-20 08:24:30.224615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
[~180 further command/completion pairs elided: WRITE (lba:125264-125800 and lba:124856-125104) and READ (lba:124784-124848) commands on sqid:1, len:8, each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1, sqhd:0017 through 0003 (wrapping after 007f), p:0 m:0 dnr:0, timestamps 2024-11-20 08:24:30.224627 through .227438]
00:27:32.658 [2024-11-20 08:24:30.227438] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.658 [2024-11-20 08:24:30.227446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:32.658 [2024-11-20 08:24:30.227459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.658 [2024-11-20 08:24:30.227467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:32.658 [2024-11-20 08:24:30.227479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.658 [2024-11-20 08:24:30.227487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:32.658 [2024-11-20 08:24:30.227499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.658 [2024-11-20 08:24:30.227507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:32.658 [2024-11-20 08:24:30.227521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.658 [2024-11-20 08:24:30.227528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:32.658 [2024-11-20 08:24:30.227541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.658 [2024-11-20 08:24:30.227548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:32.658 [2024-11-20 08:24:30.227562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.658 [2024-11-20 08:24:30.227568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:32.658 [2024-11-20 08:24:30.227580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.658 [2024-11-20 08:24:30.227588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.227601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.227608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.227621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.227628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.227644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.227651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228184] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228312] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228432] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228543] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:125352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228658] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228885] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.659 [2024-11-20 08:24:30.228913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:32.659 [2024-11-20 08:24:30.228926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.228939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.228952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.228960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.228972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.228980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.228994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229119] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.660 [2024-11-20 08:24:30.229128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.660 [2024-11-20 08:24:30.229147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229945] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.229984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.229997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.230004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.230016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.230024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.230037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:125704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.230048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:32.660 [2024-11-20 08:24:30.230060] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.660 [2024-11-20 08:24:30.230068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
[... further identical *NOTICE* command/completion pairs elided: WRITE and READ commands (sqid:1, nsid:1, lba 124784-125800, len:8) each completed with ASYMMETRIC ACCESS INACCESSIBLE (03/02), qid:1, cdw0:0, p:0 m:0 dnr:0, sequential sqhd values, timestamps 08:24:30.230081 through 08:24:30.233366 ...]
00:27:32.663 [2024-11-20 08:24:30.233366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.663 [2024-11-20 08:24:30.233374] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:32.663 [2024-11-20 08:24:30.233388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.663 [2024-11-20 08:24:30.233397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:32.663 [2024-11-20 08:24:30.233423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233508] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233749] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.233815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.233823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:125776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.234197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.234226] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.234246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.664 [2024-11-20 08:24:30.234267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.664 [2024-11-20 08:24:30.234290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.664 [2024-11-20 08:24:30.234310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.664 [2024-11-20 08:24:30.234331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234344] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.664 [2024-11-20 08:24:30.234352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.664 [2024-11-20 08:24:30.234371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.664 [2024-11-20 08:24:30.234391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.234411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.234432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.234452] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.234472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.234486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.234493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.235012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.235022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.235036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.235048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.235062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.235069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.235082] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.235089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:32.664 [2024-11-20 08:24:30.235102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.664 [2024-11-20 08:24:30.235110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235191] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235315] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:125024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:125032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:125040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:125048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:125056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:125064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:125072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:125080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235695] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:125088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:125096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:125104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:125112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:125120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:125128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235810] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:125136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:125144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:125152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:125160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:125168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235932] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:125176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:125184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.665 [2024-11-20 08:24:30.235960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:32.665 [2024-11-20 08:24:30.235973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:125192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.235982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.235995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:125200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.236002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.236014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:125208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.236022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.236034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:125216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.236042] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.236054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:125224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.236062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.236075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:125232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.236082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.236094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:125240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.236102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.236117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:125248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.236125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.236138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:125256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.236147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.236159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:125264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.236168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.236181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:125272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.236194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.236213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:125280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.236223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.236235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:125288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.236246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:125296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:125304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240161] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:125312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:125320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:125328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:125336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:125344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:125352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:125360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:125368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:125376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:125384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:125392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240392] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:125400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:125408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:125416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:125424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:125432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240678] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:125440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:125448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:125456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:125464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:32.666 [2024-11-20 08:24:30.240769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:125472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.666 [2024-11-20 08:24:30.240777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.240791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:125480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.240799] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.240817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:125488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.240826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.240840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:125496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.240848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.240865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:125504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.240873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.240890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:125512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.240897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.240912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:125520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.240920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.240935] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:125528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.240943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.240958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:125536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.240966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.240981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:125544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.240988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:124784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.667 [2024-11-20 08:24:30.241011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:124792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.667 [2024-11-20 08:24:30.241034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:125552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:125560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:125568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:125576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:125584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:125592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:125600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:125608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:125616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:125624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:125632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:125640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241328] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:125648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:125656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:125664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:125672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:125680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241459] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:125688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:125696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:125704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:125712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:125720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:125728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241584] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:125736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:125744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:125752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:32.667 [2024-11-20 08:24:30.241666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:125760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.667 [2024-11-20 08:24:30.241676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:125768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.241699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:125776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.241721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:125784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.241745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:125792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.241768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:124800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.668 [2024-11-20 08:24:30.241791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:124808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.668 [2024-11-20 08:24:30.241815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:124816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.668 [2024-11-20 08:24:30.241839] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:124824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.668 [2024-11-20 08:24:30.241861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:124832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.668 [2024-11-20 08:24:30.241884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:124840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.668 [2024-11-20 08:24:30.241906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:124848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.668 [2024-11-20 08:24:30.241930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:125800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.241952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241968] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:124856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.241976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.241992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:124864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.241998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:124872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:124880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:124888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:124896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242087] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:124904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:124912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:124920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:124928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:124936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242360] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:124944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:124952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:124960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:124968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:124976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:124984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242484] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:124992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:125000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:125008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:30.242700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:125016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:30.242711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:32.668 11282.31 IOPS, 44.07 MiB/s [2024-11-20T07:24:46.696Z] 10476.43 IOPS, 40.92 MiB/s [2024-11-20T07:24:46.696Z] 9778.00 IOPS, 38.20 MiB/s [2024-11-20T07:24:46.696Z] 9283.25 IOPS, 36.26 MiB/s [2024-11-20T07:24:46.696Z] 9405.53 IOPS, 36.74 MiB/s [2024-11-20T07:24:46.696Z] 9518.72 IOPS, 37.18 MiB/s [2024-11-20T07:24:46.696Z] 9691.11 IOPS, 37.86 MiB/s [2024-11-20T07:24:46.696Z] 9883.35 IOPS, 38.61 MiB/s 
[2024-11-20T07:24:46.696Z] 10052.57 IOPS, 39.27 MiB/s [2024-11-20T07:24:46.696Z] 10111.00 IOPS, 39.50 MiB/s [2024-11-20T07:24:46.696Z] 10159.48 IOPS, 39.69 MiB/s [2024-11-20T07:24:46.696Z] 10228.71 IOPS, 39.96 MiB/s [2024-11-20T07:24:46.696Z] 10362.92 IOPS, 40.48 MiB/s [2024-11-20T07:24:46.696Z] 10487.08 IOPS, 40.97 MiB/s [2024-11-20T07:24:46.696Z] [2024-11-20 08:24:43.927478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:43.927518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:43.927554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:4832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:43.927563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:43.927582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:4848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.668 [2024-11-20 08:24:43.927590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:27:32.668 [2024-11-20 08:24:43.927602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:4864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:4880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927628] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:4896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:4928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:4944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:4960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927735] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:4976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:4992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:5008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:5024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:5040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:5056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:5072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.927865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.927880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:32.669 [2024-11-20 08:24:43.927888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:5096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:5112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:5128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929507] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:5144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:5160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:5176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:5192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:5208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:5224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929614] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:5256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:5272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:5288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:5304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:5320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:5336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:5352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:5368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:32.669 [2024-11-20 08:24:43.929801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:5384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.669 [2024-11-20 08:24:43.929809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:32.670 [2024-11-20 08:24:43.929821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.670 [2024-11-20 08:24:43.929828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:32.670 [2024-11-20 08:24:43.929841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.670 [2024-11-20 08:24:43.929852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:32.670 [2024-11-20 08:24:43.929864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:5432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.670 [2024-11-20 08:24:43.929872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:32.670 [2024-11-20 08:24:43.929885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:5448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.670 [2024-11-20 08:24:43.929891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:32.670 [2024-11-20 08:24:43.929904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:5464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.670 [2024-11-20 08:24:43.929910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:27:32.670 [2024-11-20 08:24:43.929923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:5480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.670 [2024-11-20 08:24:43.929930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:32.670 [2024-11-20 08:24:43.929942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:5496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.670 [2024-11-20 08:24:43.929949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:27:32.670 [2024-11-20 08:24:43.929962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.670 [2024-11-20 08:24:43.929968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:27:32.670 [2024-11-20 08:24:43.929981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.670 [2024-11-20 08:24:43.929989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:27:32.670 [2024-11-20 08:24:43.930001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:5544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.670 [2024-11-20 08:24:43.930008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:27:32.670 [2024-11-20 08:24:43.930020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:5560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.670 [2024-11-20 08:24:43.930026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:27:32.670 [2024-11-20 08:24:43.930040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:32.670 [2024-11-20 08:24:43.930047] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:27:32.670 10561.26 IOPS, 41.25 MiB/s [2024-11-20T07:24:46.698Z] 10590.39 IOPS, 41.37 MiB/s [2024-11-20T07:24:46.698Z] Received shutdown signal, test time was about 28.823924 seconds 00:27:32.670 00:27:32.670 Latency(us) 00:27:32.670 [2024-11-20T07:24:46.698Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.670 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:27:32.670 Verification LBA range: start 0x0 length 0x4000 00:27:32.670 Nvme0n1 : 28.82 10609.89 41.44 0.00 0.00 12044.05 522.73 3083812.08 00:27:32.670 [2024-11-20T07:24:46.698Z] =================================================================================================================== 00:27:32.670 [2024-11-20T07:24:46.698Z] Total : 10609.89 41.44 0.00 0.00 12044.05 522.73 3083812.08 00:27:32.670 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:32.670 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:27:32.670 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:32.670 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:27:32.670 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:32.670 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync 00:27:32.670 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:32.670 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@102 -- # set +e 00:27:32.670 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:32.670 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:32.929 rmmod nvme_tcp 00:27:32.929 rmmod nvme_fabrics 00:27:32.929 rmmod nvme_keyring 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 1810196 ']' 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 1810196 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 1810196 ']' 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 1810196 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1810196 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1810196' 00:27:32.929 killing process with pid 1810196 
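The `(03/02)` tuple repeated in the completion notices above is the NVMe Status Code Type / Status Code pair: per the NVMe specification, SCT 0x3 (path-related status) with SC 0x2 is "Asymmetric Access Inaccessible", which matches the printed text. A minimal sketch of extracting that pair from a notice line (illustrative helper names, not SPDK API):

```python
import re

def decode_status(line: str):
    """Extract the (SCT, SC) pair from an SPDK completion notice like '... (03/02) ...'."""
    m = re.search(r"\(([0-9a-fA-F]{2})/([0-9a-fA-F]{2})\)", line)
    if not m:
        return None
    # Both fields are printed as two hex digits
    return int(m.group(1), 16), int(m.group(2), 16)

sample = "*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26"
print(decode_status(sample))  # -> (3, 2)
```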
00:27:32.929 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 1810196 00:27:32.930 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 1810196 00:27:32.930 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:32.930 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini 00:27:32.930 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@254 -- # local dev 00:27:32.930 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:32.930 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:32.930 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:32.930 08:24:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@121 -- # return 0 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@211 -- # local 
dev=cvl_0_0 in_ns= 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=() 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@274 -- # iptr 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-save 00:27:35.466 08:24:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:35.466 08:24:48 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@548 -- # iptables-restore 00:27:35.466 00:27:35.466 real 0m40.664s 00:27:35.466 user 1m49.924s 00:27:35.466 sys 0m11.561s 00:27:35.466 08:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.466 08:24:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:27:35.466 ************************************ 00:27:35.466 END TEST nvmf_host_multipath_status 00:27:35.466 ************************************ 00:27:35.466 08:24:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:35.466 08:24:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:35.466 08:24:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.466 08:24:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.466 ************************************ 00:27:35.466 START TEST nvmf_discovery_remove_ifc 00:27:35.466 ************************************ 00:27:35.466 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:35.466 * Looking for test storage... 
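The teardown trace above walks `dev_map` and runs `flush_ip` on each interface (`cvl_0_0`, `cvl_0_1`), optionally inside a network namespace. A side-effect-free sketch of that loop, with hypothetical helper names that only print the command instead of evaluating it:

```shell
# Build the same command flush_ip would eval: with a namespace, prefix
# "ip netns exec <ns>"; otherwise flush the device directly.
flush_ip_cmd() {
  local dev=$1 in_ns=$2
  if [ -n "$in_ns" ]; then
    echo "ip netns exec $in_ns ip addr flush dev $dev"
  else
    echo "ip addr flush dev $dev"
  fi
}

# Dry-run over a device map like the one torn down above:
for dev in cvl_0_0 cvl_0_1; do
  flush_ip_cmd "$dev"
done
```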
00:27:35.466 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.466 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:35.466 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:27:35.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.467 --rc genhtml_branch_coverage=1 00:27:35.467 --rc genhtml_function_coverage=1 00:27:35.467 --rc genhtml_legend=1 00:27:35.467 --rc geninfo_all_blocks=1 00:27:35.467 --rc geninfo_unexecuted_blocks=1 00:27:35.467 00:27:35.467 ' 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:35.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.467 --rc genhtml_branch_coverage=1 00:27:35.467 --rc genhtml_function_coverage=1 00:27:35.467 --rc genhtml_legend=1 00:27:35.467 --rc geninfo_all_blocks=1 00:27:35.467 --rc geninfo_unexecuted_blocks=1 00:27:35.467 00:27:35.467 ' 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:35.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.467 --rc genhtml_branch_coverage=1 00:27:35.467 --rc genhtml_function_coverage=1 00:27:35.467 --rc genhtml_legend=1 00:27:35.467 --rc geninfo_all_blocks=1 00:27:35.467 --rc geninfo_unexecuted_blocks=1 00:27:35.467 00:27:35.467 ' 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:35.467 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.467 --rc genhtml_branch_coverage=1 00:27:35.467 --rc genhtml_function_coverage=1 00:27:35.467 --rc genhtml_legend=1 00:27:35.467 --rc geninfo_all_blocks=1 00:27:35.467 --rc geninfo_unexecuted_blocks=1 00:27:35.467 00:27:35.467 ' 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.467 08:24:49 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@50 -- # : 0 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:35.467 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:35.467 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # 
host_sock=/tmp/host.sock 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # xtrace_disable 00:27:35.468 08:24:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # pci_devs=() 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:42.052 08:24:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # net_devs=() 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # e810=() 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # local -ga e810 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # x722=() 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # local -ga x722 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # mlx=() 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # local -ga mlx 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:42.052 08:24:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:42.052 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:42.053 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:42.053 08:24:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:42.053 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:42.053 Found net devices under 0000:86:00.0: cvl_0_0 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: 
cvl_0_1' 00:27:42.053 Found net devices under 0000:86:00.1: cvl_0_1 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # is_hw=yes 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@247 -- # create_target_ns 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:42.053 08:24:54 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:27:42.053 
08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:42.053 08:24:54 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:42.053 10.0.0.1 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:42.053 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 
-- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:42.054 10.0.0.2 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:27:42.054 08:24:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:42.054 08:24:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:42.054 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:42.054 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.508 ms 00:27:42.054 00:27:42.054 --- 10.0.0.1 ping statistics --- 00:27:42.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.054 rtt min/avg/max/mdev = 0.508/0.508/0.508/0.000 ms 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 
00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:27:42.054 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.054 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:27:42.054 00:27:42.054 --- 10.0.0.2 ping statistics --- 00:27:42.054 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.054 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair++ )) 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # return 0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # 
get_initiator_ip_address initiator1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=initiator1 00:27:42.054 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # return 1 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev= 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # return 0 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ 
-n NVMF_TARGET_NS_CMD ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target0 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target0 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:42.055 08:24:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # get_net_dev target1 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # local dev=target1 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # return 1 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@159 -- # dev= 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@160 -- # return 0 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:27:42.055 ' 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.055 08:24:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=1819068 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 1819068 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1819068 ']' 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.055 [2024-11-20 08:24:55.372817] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:27:42.055 [2024-11-20 08:24:55.372863] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.055 [2024-11-20 08:24:55.438094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.055 [2024-11-20 08:24:55.479824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.055 [2024-11-20 08:24:55.479862] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.055 [2024-11-20 08:24:55.479870] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.055 [2024-11-20 08:24:55.479876] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.055 [2024-11-20 08:24:55.479881] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:42.055 [2024-11-20 08:24:55.480445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.055 [2024-11-20 08:24:55.635153] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.055 [2024-11-20 08:24:55.643353] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:42.055 null0 00:27:42.055 [2024-11-20 08:24:55.675320] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1819257 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1819257 /tmp/host.sock 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 1819257 ']' 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:42.055 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.055 [2024-11-20 08:24:55.746144] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:27:42.055 [2024-11-20 08:24:55.746185] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1819257 ] 00:27:42.055 [2024-11-20 08:24:55.820085] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.055 [2024-11-20 08:24:55.863419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.055 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.056 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.056 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:42.056 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.056 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.056 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.056 08:24:55 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:42.056 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.056 08:24:55 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.436 [2024-11-20 08:24:57.042361] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:43.436 [2024-11-20 08:24:57.042380] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:43.436 [2024-11-20 08:24:57.042395] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:43.436 [2024-11-20 08:24:57.130667] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:43.436 [2024-11-20 08:24:57.354773] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:43.436 [2024-11-20 08:24:57.355508] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xf749f0:1 started. 
00:27:43.436 [2024-11-20 08:24:57.356824] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:43.436 [2024-11-20 08:24:57.356860] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:43.436 [2024-11-20 08:24:57.356878] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:43.436 [2024-11-20 08:24:57.356891] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:43.436 [2024-11-20 08:24:57.356909] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.436 [2024-11-20 08:24:57.401748] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 1] qpair 0xf749f0 was disconnected and freed. delete nvme_qpair. 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_1 00:27:43.436 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down 00:27:43.695 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:27:43.695 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:43.695 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.695 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:43.695 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.695 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:43.695 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.695 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:43.695 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.695 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:43.695 08:24:57 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:44.631 08:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- 
# get_bdev_list 00:27:44.631 08:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:44.631 08:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:44.631 08:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.631 08:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:44.631 08:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.631 08:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:44.631 08:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.631 08:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:44.631 08:24:58 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:46.009 08:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:46.009 08:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.009 08:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:46.009 08:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.009 08:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:46.009 08:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.009 08:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:46.009 08:24:59 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.009 08:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:46.009 08:24:59 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:46.945 08:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:46.945 08:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.945 08:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:46.945 08:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.945 08:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:46.945 08:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.945 08:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:46.945 08:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.945 08:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:46.945 08:25:00 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:47.883 08:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:47.884 08:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:47.884 08:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:47.884 08:25:01 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.884 08:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:47.884 08:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.884 08:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:47.884 08:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.884 08:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:47.884 08:25:01 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:48.820 08:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:48.820 08:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.820 08:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:48.820 08:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.820 08:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:48.820 08:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.820 08:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:48.820 08:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.820 [2024-11-20 08:25:02.798467] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:48.820 
[2024-11-20 08:25:02.798503] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.820 [2024-11-20 08:25:02.798515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.820 [2024-11-20 08:25:02.798524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.820 [2024-11-20 08:25:02.798531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.820 [2024-11-20 08:25:02.798538] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.820 [2024-11-20 08:25:02.798545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.820 [2024-11-20 08:25:02.798552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.820 [2024-11-20 08:25:02.798558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.820 [2024-11-20 08:25:02.798566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.820 [2024-11-20 08:25:02.798573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.820 [2024-11-20 08:25:02.798584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf51220 is same with the state(6) to be set 00:27:48.820 [2024-11-20 08:25:02.808488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0xf51220 (9): Bad file descriptor 00:27:48.820 [2024-11-20 08:25:02.818522] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:48.820 [2024-11-20 08:25:02.818534] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:48.820 [2024-11-20 08:25:02.818539] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:48.820 [2024-11-20 08:25:02.818543] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:48.820 [2024-11-20 08:25:02.818564] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:48.820 08:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:48.820 08:25:02 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:50.199 08:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:50.199 08:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.199 08:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:50.199 08:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.199 08:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:50.199 08:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.199 08:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:50.199 [2024-11-20 08:25:03.879253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 
110 00:27:50.199 [2024-11-20 08:25:03.879331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf51220 with addr=10.0.0.2, port=4420 00:27:50.199 [2024-11-20 08:25:03.879363] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf51220 is same with the state(6) to be set 00:27:50.199 [2024-11-20 08:25:03.879412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf51220 (9): Bad file descriptor 00:27:50.199 [2024-11-20 08:25:03.880348] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 00:27:50.199 [2024-11-20 08:25:03.880412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:50.199 [2024-11-20 08:25:03.880437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:50.199 [2024-11-20 08:25:03.880460] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:50.199 [2024-11-20 08:25:03.880480] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:50.199 [2024-11-20 08:25:03.880496] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:50.199 [2024-11-20 08:25:03.880510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:50.199 [2024-11-20 08:25:03.880534] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:27:50.199 [2024-11-20 08:25:03.880548] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:50.199 08:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.199 08:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:27:50.199 08:25:03 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:51.137 [2024-11-20 08:25:04.883061] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:51.137 [2024-11-20 08:25:04.883081] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:51.137 [2024-11-20 08:25:04.883092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:51.137 [2024-11-20 08:25:04.883099] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:51.137 [2024-11-20 08:25:04.883106] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:51.137 [2024-11-20 08:25:04.883112] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:51.137 [2024-11-20 08:25:04.883117] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:51.137 [2024-11-20 08:25:04.883121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:51.137 [2024-11-20 08:25:04.883139] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:51.137 [2024-11-20 08:25:04.883157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.138 [2024-11-20 08:25:04.883165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.138 [2024-11-20 08:25:04.883174] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.138 [2024-11-20 08:25:04.883181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.138 [2024-11-20 08:25:04.883188] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.138 [2024-11-20 08:25:04.883194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.138 [2024-11-20 08:25:04.883205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.138 [2024-11-20 08:25:04.883212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.138 [2024-11-20 08:25:04.883219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:51.138 [2024-11-20 08:25:04.883226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:51.138 [2024-11-20 08:25:04.883233] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:51.138 [2024-11-20 08:25:04.883650] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf40900 (9): Bad file descriptor 00:27:51.138 [2024-11-20 08:25:04.884660] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:51.138 [2024-11-20 08:25:04.884673] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:51.138 08:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:51.138 08:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.138 08:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:51.138 08:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.138 08:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:51.138 08:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.138 08:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:51.138 08:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.138 08:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:27:51.138 08:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:51.138 08:25:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:51.138 08:25:05 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:27:51.138 08:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:51.138 08:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.138 08:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:51.138 08:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.138 08:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:51.138 08:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.138 08:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:51.138 08:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:51.138 08:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:51.138 08:25:05 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:52.074 08:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:52.074 08:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:52.074 08:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:52.074 08:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:52.074 08:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:52.074 08:25:06 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:52.074 08:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:52.074 08:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.331 08:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:52.331 08:25:06 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:52.896 [2024-11-20 08:25:06.896748] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:52.896 [2024-11-20 08:25:06.896765] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:52.896 [2024-11-20 08:25:06.896776] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:53.154 [2024-11-20 08:25:06.983041] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:53.154 08:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:53.154 08:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:53.154 08:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:53.154 08:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.154 08:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:53.154 08:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:53.154 08:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # xargs 00:27:53.154 08:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.413 08:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:53.413 08:25:07 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:27:53.413 [2024-11-20 08:25:07.200184] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:53.413 [2024-11-20 08:25:07.200787] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xf50080:1 started. 00:27:53.413 [2024-11-20 08:25:07.201808] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:53.413 [2024-11-20 08:25:07.201836] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:53.413 [2024-11-20 08:25:07.201852] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:53.413 [2024-11-20 08:25:07.201863] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:53.413 [2024-11-20 08:25:07.201870] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:53.413 [2024-11-20 08:25:07.205959] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xf50080 was disconnected and freed. delete nvme_qpair. 
00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1819257 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1819257 ']' 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1819257 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1819257 
00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1819257' 00:27:54.349 killing process with pid 1819257 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1819257 00:27:54.349 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1819257 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:54.609 rmmod nvme_tcp 00:27:54.609 rmmod nvme_fabrics 00:27:54.609 rmmod nvme_keyring 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 1819068 ']' 00:27:54.609 
08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 1819068 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 1819068 ']' 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 1819068 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1819068 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1819068' 00:27:54.609 killing process with pid 1819068 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 1819068 00:27:54.609 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 1819068 00:27:54.868 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:54.868 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:27:54.868 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@254 -- # local dev 00:27:54.868 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # remove_target_ns 00:27:54.868 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:54.868 08:25:08 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:54.868 08:25:08 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:56.773 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # delete_main_bridge 00:27:56.773 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:56.773 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@121 -- # return 0 00:27:56.773 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:56.773 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:56.773 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 
00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@274 -- # iptr 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-save 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@548 -- # iptables-restore 00:27:56.774 00:27:56.774 real 0m21.714s 00:27:56.774 user 0m27.115s 00:27:56.774 sys 0m5.857s 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:56.774 08:25:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:56.774 ************************************ 00:27:56.774 END TEST nvmf_discovery_remove_ifc 00:27:56.774 ************************************ 00:27:57.033 08:25:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:57.033 08:25:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 
00:27:57.033 08:25:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:57.033 08:25:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:57.033 ************************************ 00:27:57.033 START TEST nvmf_identify_kernel_target 00:27:57.033 ************************************ 00:27:57.033 08:25:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:27:57.033 * Looking for test storage... 00:27:57.033 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:57.033 08:25:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:57.033 08:25:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:27:57.033 08:25:10 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:27:57.033 08:25:11 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # 
ver2[v]=2 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:57.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.033 --rc genhtml_branch_coverage=1 00:27:57.033 --rc genhtml_function_coverage=1 00:27:57.033 --rc genhtml_legend=1 00:27:57.033 --rc geninfo_all_blocks=1 00:27:57.033 --rc geninfo_unexecuted_blocks=1 00:27:57.033 00:27:57.033 ' 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:57.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.033 --rc genhtml_branch_coverage=1 00:27:57.033 --rc genhtml_function_coverage=1 00:27:57.033 --rc genhtml_legend=1 00:27:57.033 --rc geninfo_all_blocks=1 00:27:57.033 --rc geninfo_unexecuted_blocks=1 00:27:57.033 00:27:57.033 ' 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:57.033 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.033 --rc genhtml_branch_coverage=1 00:27:57.033 --rc genhtml_function_coverage=1 00:27:57.033 --rc genhtml_legend=1 00:27:57.033 --rc geninfo_all_blocks=1 00:27:57.033 --rc geninfo_unexecuted_blocks=1 00:27:57.033 00:27:57.033 ' 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:57.033 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:57.033 --rc genhtml_branch_coverage=1 00:27:57.033 --rc genhtml_function_coverage=1 00:27:57.033 --rc genhtml_legend=1 00:27:57.033 --rc geninfo_all_blocks=1 00:27:57.033 --rc geninfo_unexecuted_blocks=1 00:27:57.033 00:27:57.033 ' 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:57.033 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:57.034 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:57.034 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:57.034 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:57.034 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:27:57.034 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:57.034 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:57.034 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:57.034 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:57.034 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:27:57.293 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:57.293 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:57.293 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:57.293 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.293 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.293 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:57.294 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # xtrace_disable 00:27:57.294 08:25:11 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
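The `[: : integer expression expected` message logged above comes from nvmf/common.sh line 31 evaluating `'[' '' -eq 1 ']'`: the `-eq` operator of `[` requires integer operands, and the left-hand variable expanded to an empty string. A minimal sketch of the failure mode and a guarded form (the `flag` and `state` names are hypothetical, not from the script):

```shell
# Reproduces the logged failure: -eq needs integers, so an empty expansion
# makes [ report "integer expression expected" and return status 2.
flag=""
[ "$flag" -eq 1 ] 2>/dev/null || echo "numeric test failed (status $?)"

# Guarded form: give the expansion a default of 0 so [ always sees an integer.
if [ "${flag:-0}" -eq 1 ]; then
  state=enabled
else
  state=disabled
fi
echo "$state"   # disabled
```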
common/autotest_common.sh@10 -- # set +x 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # pci_devs=() 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # net_devs=() 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # e810=() 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # local -ga e810 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # x722=() 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # local -ga x722 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # mlx=() 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # local -ga mlx 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:03.867 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- 
# pci_devs=("${e810[@]}") 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:03.868 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:03.868 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:03.868 08:25:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:03.868 Found net devices under 0000:86:00.0: cvl_0_0 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:03.868 Found net devices under 0000:86:00.1: cvl_0_1 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # is_hw=yes 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@247 -- # create_target_ns 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@136 -- # ip netns add 
nvmf_ns_spdk 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:03.868 08:25:16 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 
in_ns= 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:03.868 10.0.0.1 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.868 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@11 -- # local val=167772162 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:03.869 10.0.0.2 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
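The `val_to_ip` calls traced above turn the integer pool values 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2 via `printf '%u.%u.%u.%u\n'`. A self-contained sketch of that conversion, with the byte extraction written out explicitly (setup.sh may derive the octets differently):

```shell
# Convert a 32-bit value into dotted-quad form, as nvmf/setup.sh's val_to_ip
# does for the ip_pool counter: 167772161 == 0x0a000001 -> 10.0.0.1.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```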
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:03.869 08:25:16 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # 
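The `setup_interfaces` arithmetic traced above (`(( ip_pool += _dev * 2, ... ))` followed by `(( _dev++, ip_pool += 2 ))`) hands each initiator/target pair two consecutive addresses starting at 0x0a000001. A sketch of that allocation loop, assuming the same single-pair case as this run:

```shell
# Allocate two consecutive IPs per initiator/target pair, mirroring the
# ip_pool bookkeeping in nvmf/setup.sh for this run's single phy pair.
no=1                        # number of pairs, as in "setup_interfaces 1 phy"
ip_pool=$(( 0x0a000001 ))   # 10.0.0.1
_dev=0
while (( _dev < no )); do
  initiator_ip=$ip_pool
  target_ip=$(( ip_pool + 1 ))
  printf 'pair %d: initiator=%u target=%u\n' "$_dev" "$initiator_ip" "$target_ip"
  (( _dev++, ip_pool += 2 ))
done
# prints: pair 0: initiator=167772161 target=167772162
```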
(( pair = 0 )) 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 
NVMF_TARGET_NS_CMD 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:03.869 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:03.869 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.374 ms 00:28:03.869 00:28:03.869 --- 10.0.0.1 ping statistics --- 00:28:03.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.869 rtt min/avg/max/mdev = 0.374/0.374/0.374/0.000 ms 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:03.869 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:03.869 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:28:03.869 00:28:03.869 --- 10.0.0.2 ping statistics --- 00:28:03.869 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.869 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # return 0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:03.869 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # return 1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev= 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@160 -- # return 0 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target0 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:03.870 08:25:17 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=target1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 
00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # return 1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev= 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@160 -- # return 0 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:28:03.870 ' 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 
00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:03.870 08:25:17 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:06.408 Waiting for block devices as requested 00:28:06.408 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:06.408 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:06.408 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:06.408 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:06.408 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:06.408 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:06.667 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:06.667 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:06.667 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:06.926 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:06.926 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:06.926 
0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:06.926 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:07.185 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:07.185 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:07.185 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:07.444 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:07.444 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:28:07.444 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:07.444 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:28:07.444 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:07.444 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:07.444 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:07.444 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:28:07.444 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:07.445 No valid GPT data, bailing 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # echo 1 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo tcp 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:28:07.445 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 
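[Editor's note: the `configure_kernel_target` steps traced above (nvmf/common.sh@444-479) amount to the following distilled configfs sequence. This is a sketch reconstructed from the trace, not part of the original run: the xtrace output elides the redirection targets of each `echo`, so the attribute file names below (`attr_model`, `attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are inferred from the standard kernel nvmet configfs layout. Requires root, the `nvmet`/`nvme-tcp` modules, and an unused `/dev/nvme0n1`.]

```shell
#!/bin/sh
# Distilled kernel NVMe-oF/TCP target setup, as performed by the trace above.
modprobe nvmet
modprobe nvme-tcp

nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

# Create the subsystem, one namespace, and one port (common.sh@460-462).
mkdir "$subsys"
mkdir "$subsys/namespaces/1"
mkdir "$nvmet/ports/1"

# Subsystem attributes (common.sh@467-471): model string, open host access,
# back the namespace with the local NVMe block device, then enable it.
echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"
echo 1            > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1            > "$subsys/namespaces/1/enable"

# Port attributes (common.sh@473-476): listen on 10.0.0.1:4420 over TCP/IPv4.
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"

# Expose the subsystem on the port (common.sh@479); after this, the
# `nvme discover` call below should report two log entries: the discovery
# subsystem and nqn.2016-06.io.spdk:testnqn.
ln -s "$subsys" "$nvmet/ports/1/subsystems/"
```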
00:28:07.705 00:28:07.705 Discovery Log Number of Records 2, Generation counter 2 00:28:07.705 =====Discovery Log Entry 0====== 00:28:07.705 trtype: tcp 00:28:07.705 adrfam: ipv4 00:28:07.705 subtype: current discovery subsystem 00:28:07.705 treq: not specified, sq flow control disable supported 00:28:07.705 portid: 1 00:28:07.705 trsvcid: 4420 00:28:07.705 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:28:07.705 traddr: 10.0.0.1 00:28:07.705 eflags: none 00:28:07.705 sectype: none 00:28:07.705 =====Discovery Log Entry 1====== 00:28:07.705 trtype: tcp 00:28:07.705 adrfam: ipv4 00:28:07.705 subtype: nvme subsystem 00:28:07.705 treq: not specified, sq flow control disable supported 00:28:07.705 portid: 1 00:28:07.705 trsvcid: 4420 00:28:07.705 subnqn: nqn.2016-06.io.spdk:testnqn 00:28:07.705 traddr: 10.0.0.1 00:28:07.705 eflags: none 00:28:07.705 sectype: none 00:28:07.705 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:28:07.705 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:28:07.705 ===================================================== 00:28:07.705 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:07.705 ===================================================== 00:28:07.705 Controller Capabilities/Features 00:28:07.705 ================================ 00:28:07.705 Vendor ID: 0000 00:28:07.705 Subsystem Vendor ID: 0000 00:28:07.705 Serial Number: 5adba46a374367b99c01 00:28:07.705 Model Number: Linux 00:28:07.705 Firmware Version: 6.8.9-20 00:28:07.705 Recommended Arb Burst: 0 00:28:07.705 IEEE OUI Identifier: 00 00 00 00:28:07.705 Multi-path I/O 00:28:07.705 May have multiple subsystem ports: No 00:28:07.705 May have multiple controllers: No 00:28:07.705 Associated with SR-IOV VF: No 00:28:07.705 Max Data Transfer Size: Unlimited 00:28:07.705 
Max Number of Namespaces: 0 00:28:07.705 Max Number of I/O Queues: 1024 00:28:07.705 NVMe Specification Version (VS): 1.3 00:28:07.705 NVMe Specification Version (Identify): 1.3 00:28:07.705 Maximum Queue Entries: 1024 00:28:07.705 Contiguous Queues Required: No 00:28:07.705 Arbitration Mechanisms Supported 00:28:07.705 Weighted Round Robin: Not Supported 00:28:07.705 Vendor Specific: Not Supported 00:28:07.705 Reset Timeout: 7500 ms 00:28:07.705 Doorbell Stride: 4 bytes 00:28:07.705 NVM Subsystem Reset: Not Supported 00:28:07.705 Command Sets Supported 00:28:07.705 NVM Command Set: Supported 00:28:07.705 Boot Partition: Not Supported 00:28:07.705 Memory Page Size Minimum: 4096 bytes 00:28:07.705 Memory Page Size Maximum: 4096 bytes 00:28:07.705 Persistent Memory Region: Not Supported 00:28:07.705 Optional Asynchronous Events Supported 00:28:07.705 Namespace Attribute Notices: Not Supported 00:28:07.705 Firmware Activation Notices: Not Supported 00:28:07.705 ANA Change Notices: Not Supported 00:28:07.705 PLE Aggregate Log Change Notices: Not Supported 00:28:07.705 LBA Status Info Alert Notices: Not Supported 00:28:07.705 EGE Aggregate Log Change Notices: Not Supported 00:28:07.705 Normal NVM Subsystem Shutdown event: Not Supported 00:28:07.705 Zone Descriptor Change Notices: Not Supported 00:28:07.705 Discovery Log Change Notices: Supported 00:28:07.705 Controller Attributes 00:28:07.705 128-bit Host Identifier: Not Supported 00:28:07.705 Non-Operational Permissive Mode: Not Supported 00:28:07.705 NVM Sets: Not Supported 00:28:07.705 Read Recovery Levels: Not Supported 00:28:07.705 Endurance Groups: Not Supported 00:28:07.705 Predictable Latency Mode: Not Supported 00:28:07.705 Traffic Based Keep ALive: Not Supported 00:28:07.705 Namespace Granularity: Not Supported 00:28:07.705 SQ Associations: Not Supported 00:28:07.705 UUID List: Not Supported 00:28:07.705 Multi-Domain Subsystem: Not Supported 00:28:07.705 Fixed Capacity Management: Not Supported 00:28:07.705 
Variable Capacity Management: Not Supported 00:28:07.705 Delete Endurance Group: Not Supported 00:28:07.705 Delete NVM Set: Not Supported 00:28:07.705 Extended LBA Formats Supported: Not Supported 00:28:07.706 Flexible Data Placement Supported: Not Supported 00:28:07.706 00:28:07.706 Controller Memory Buffer Support 00:28:07.706 ================================ 00:28:07.706 Supported: No 00:28:07.706 00:28:07.706 Persistent Memory Region Support 00:28:07.706 ================================ 00:28:07.706 Supported: No 00:28:07.706 00:28:07.706 Admin Command Set Attributes 00:28:07.706 ============================ 00:28:07.706 Security Send/Receive: Not Supported 00:28:07.706 Format NVM: Not Supported 00:28:07.706 Firmware Activate/Download: Not Supported 00:28:07.706 Namespace Management: Not Supported 00:28:07.706 Device Self-Test: Not Supported 00:28:07.706 Directives: Not Supported 00:28:07.706 NVMe-MI: Not Supported 00:28:07.706 Virtualization Management: Not Supported 00:28:07.706 Doorbell Buffer Config: Not Supported 00:28:07.706 Get LBA Status Capability: Not Supported 00:28:07.706 Command & Feature Lockdown Capability: Not Supported 00:28:07.706 Abort Command Limit: 1 00:28:07.706 Async Event Request Limit: 1 00:28:07.706 Number of Firmware Slots: N/A 00:28:07.706 Firmware Slot 1 Read-Only: N/A 00:28:07.706 Firmware Activation Without Reset: N/A 00:28:07.706 Multiple Update Detection Support: N/A 00:28:07.706 Firmware Update Granularity: No Information Provided 00:28:07.706 Per-Namespace SMART Log: No 00:28:07.706 Asymmetric Namespace Access Log Page: Not Supported 00:28:07.706 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:28:07.706 Command Effects Log Page: Not Supported 00:28:07.706 Get Log Page Extended Data: Supported 00:28:07.706 Telemetry Log Pages: Not Supported 00:28:07.706 Persistent Event Log Pages: Not Supported 00:28:07.706 Supported Log Pages Log Page: May Support 00:28:07.706 Commands Supported & Effects Log Page: Not Supported 
00:28:07.706 Feature Identifiers & Effects Log Page:May Support 00:28:07.706 NVMe-MI Commands & Effects Log Page: May Support 00:28:07.706 Data Area 4 for Telemetry Log: Not Supported 00:28:07.706 Error Log Page Entries Supported: 1 00:28:07.706 Keep Alive: Not Supported 00:28:07.706 00:28:07.706 NVM Command Set Attributes 00:28:07.706 ========================== 00:28:07.706 Submission Queue Entry Size 00:28:07.706 Max: 1 00:28:07.706 Min: 1 00:28:07.706 Completion Queue Entry Size 00:28:07.706 Max: 1 00:28:07.706 Min: 1 00:28:07.706 Number of Namespaces: 0 00:28:07.706 Compare Command: Not Supported 00:28:07.706 Write Uncorrectable Command: Not Supported 00:28:07.706 Dataset Management Command: Not Supported 00:28:07.706 Write Zeroes Command: Not Supported 00:28:07.706 Set Features Save Field: Not Supported 00:28:07.706 Reservations: Not Supported 00:28:07.706 Timestamp: Not Supported 00:28:07.706 Copy: Not Supported 00:28:07.706 Volatile Write Cache: Not Present 00:28:07.706 Atomic Write Unit (Normal): 1 00:28:07.706 Atomic Write Unit (PFail): 1 00:28:07.706 Atomic Compare & Write Unit: 1 00:28:07.706 Fused Compare & Write: Not Supported 00:28:07.706 Scatter-Gather List 00:28:07.706 SGL Command Set: Supported 00:28:07.706 SGL Keyed: Not Supported 00:28:07.706 SGL Bit Bucket Descriptor: Not Supported 00:28:07.706 SGL Metadata Pointer: Not Supported 00:28:07.706 Oversized SGL: Not Supported 00:28:07.706 SGL Metadata Address: Not Supported 00:28:07.706 SGL Offset: Supported 00:28:07.706 Transport SGL Data Block: Not Supported 00:28:07.706 Replay Protected Memory Block: Not Supported 00:28:07.706 00:28:07.706 Firmware Slot Information 00:28:07.706 ========================= 00:28:07.706 Active slot: 0 00:28:07.706 00:28:07.706 00:28:07.706 Error Log 00:28:07.706 ========= 00:28:07.706 00:28:07.706 Active Namespaces 00:28:07.706 ================= 00:28:07.706 Discovery Log Page 00:28:07.706 ================== 00:28:07.706 Generation Counter: 2 00:28:07.706 Number of 
Records: 2 00:28:07.706 Record Format: 0 00:28:07.706 00:28:07.706 Discovery Log Entry 0 00:28:07.706 ---------------------- 00:28:07.706 Transport Type: 3 (TCP) 00:28:07.706 Address Family: 1 (IPv4) 00:28:07.706 Subsystem Type: 3 (Current Discovery Subsystem) 00:28:07.706 Entry Flags: 00:28:07.706 Duplicate Returned Information: 0 00:28:07.706 Explicit Persistent Connection Support for Discovery: 0 00:28:07.706 Transport Requirements: 00:28:07.706 Secure Channel: Not Specified 00:28:07.706 Port ID: 1 (0x0001) 00:28:07.706 Controller ID: 65535 (0xffff) 00:28:07.706 Admin Max SQ Size: 32 00:28:07.706 Transport Service Identifier: 4420 00:28:07.706 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:28:07.706 Transport Address: 10.0.0.1 00:28:07.706 Discovery Log Entry 1 00:28:07.706 ---------------------- 00:28:07.706 Transport Type: 3 (TCP) 00:28:07.706 Address Family: 1 (IPv4) 00:28:07.706 Subsystem Type: 2 (NVM Subsystem) 00:28:07.706 Entry Flags: 00:28:07.706 Duplicate Returned Information: 0 00:28:07.706 Explicit Persistent Connection Support for Discovery: 0 00:28:07.706 Transport Requirements: 00:28:07.706 Secure Channel: Not Specified 00:28:07.706 Port ID: 1 (0x0001) 00:28:07.706 Controller ID: 65535 (0xffff) 00:28:07.706 Admin Max SQ Size: 32 00:28:07.706 Transport Service Identifier: 4420 00:28:07.706 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:28:07.706 Transport Address: 10.0.0.1 00:28:07.706 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:28:07.706 get_feature(0x01) failed 00:28:07.706 get_feature(0x02) failed 00:28:07.706 get_feature(0x04) failed 00:28:07.706 ===================================================== 00:28:07.706 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 
00:28:07.706 ===================================================== 00:28:07.706 Controller Capabilities/Features 00:28:07.706 ================================ 00:28:07.706 Vendor ID: 0000 00:28:07.706 Subsystem Vendor ID: 0000 00:28:07.706 Serial Number: 3b1a5011d6f80511bd06 00:28:07.706 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:28:07.706 Firmware Version: 6.8.9-20 00:28:07.706 Recommended Arb Burst: 6 00:28:07.706 IEEE OUI Identifier: 00 00 00 00:28:07.706 Multi-path I/O 00:28:07.706 May have multiple subsystem ports: Yes 00:28:07.706 May have multiple controllers: Yes 00:28:07.706 Associated with SR-IOV VF: No 00:28:07.706 Max Data Transfer Size: Unlimited 00:28:07.706 Max Number of Namespaces: 1024 00:28:07.706 Max Number of I/O Queues: 128 00:28:07.706 NVMe Specification Version (VS): 1.3 00:28:07.706 NVMe Specification Version (Identify): 1.3 00:28:07.706 Maximum Queue Entries: 1024 00:28:07.706 Contiguous Queues Required: No 00:28:07.706 Arbitration Mechanisms Supported 00:28:07.706 Weighted Round Robin: Not Supported 00:28:07.706 Vendor Specific: Not Supported 00:28:07.706 Reset Timeout: 7500 ms 00:28:07.706 Doorbell Stride: 4 bytes 00:28:07.706 NVM Subsystem Reset: Not Supported 00:28:07.706 Command Sets Supported 00:28:07.706 NVM Command Set: Supported 00:28:07.706 Boot Partition: Not Supported 00:28:07.706 Memory Page Size Minimum: 4096 bytes 00:28:07.706 Memory Page Size Maximum: 4096 bytes 00:28:07.706 Persistent Memory Region: Not Supported 00:28:07.706 Optional Asynchronous Events Supported 00:28:07.706 Namespace Attribute Notices: Supported 00:28:07.706 Firmware Activation Notices: Not Supported 00:28:07.706 ANA Change Notices: Supported 00:28:07.706 PLE Aggregate Log Change Notices: Not Supported 00:28:07.706 LBA Status Info Alert Notices: Not Supported 00:28:07.706 EGE Aggregate Log Change Notices: Not Supported 00:28:07.706 Normal NVM Subsystem Shutdown event: Not Supported 00:28:07.706 Zone Descriptor Change Notices: Not Supported 
00:28:07.706 Discovery Log Change Notices: Not Supported 00:28:07.706 Controller Attributes 00:28:07.706 128-bit Host Identifier: Supported 00:28:07.706 Non-Operational Permissive Mode: Not Supported 00:28:07.706 NVM Sets: Not Supported 00:28:07.706 Read Recovery Levels: Not Supported 00:28:07.706 Endurance Groups: Not Supported 00:28:07.706 Predictable Latency Mode: Not Supported 00:28:07.706 Traffic Based Keep ALive: Supported 00:28:07.706 Namespace Granularity: Not Supported 00:28:07.706 SQ Associations: Not Supported 00:28:07.706 UUID List: Not Supported 00:28:07.706 Multi-Domain Subsystem: Not Supported 00:28:07.706 Fixed Capacity Management: Not Supported 00:28:07.706 Variable Capacity Management: Not Supported 00:28:07.706 Delete Endurance Group: Not Supported 00:28:07.706 Delete NVM Set: Not Supported 00:28:07.706 Extended LBA Formats Supported: Not Supported 00:28:07.707 Flexible Data Placement Supported: Not Supported 00:28:07.707 00:28:07.707 Controller Memory Buffer Support 00:28:07.707 ================================ 00:28:07.707 Supported: No 00:28:07.707 00:28:07.707 Persistent Memory Region Support 00:28:07.707 ================================ 00:28:07.707 Supported: No 00:28:07.707 00:28:07.707 Admin Command Set Attributes 00:28:07.707 ============================ 00:28:07.707 Security Send/Receive: Not Supported 00:28:07.707 Format NVM: Not Supported 00:28:07.707 Firmware Activate/Download: Not Supported 00:28:07.707 Namespace Management: Not Supported 00:28:07.707 Device Self-Test: Not Supported 00:28:07.707 Directives: Not Supported 00:28:07.707 NVMe-MI: Not Supported 00:28:07.707 Virtualization Management: Not Supported 00:28:07.707 Doorbell Buffer Config: Not Supported 00:28:07.707 Get LBA Status Capability: Not Supported 00:28:07.707 Command & Feature Lockdown Capability: Not Supported 00:28:07.707 Abort Command Limit: 4 00:28:07.707 Async Event Request Limit: 4 00:28:07.707 Number of Firmware Slots: N/A 00:28:07.707 Firmware Slot 1 
Read-Only: N/A 00:28:07.707 Firmware Activation Without Reset: N/A 00:28:07.707 Multiple Update Detection Support: N/A 00:28:07.707 Firmware Update Granularity: No Information Provided 00:28:07.707 Per-Namespace SMART Log: Yes 00:28:07.707 Asymmetric Namespace Access Log Page: Supported 00:28:07.707 ANA Transition Time : 10 sec 00:28:07.707 00:28:07.707 Asymmetric Namespace Access Capabilities 00:28:07.707 ANA Optimized State : Supported 00:28:07.707 ANA Non-Optimized State : Supported 00:28:07.707 ANA Inaccessible State : Supported 00:28:07.707 ANA Persistent Loss State : Supported 00:28:07.707 ANA Change State : Supported 00:28:07.707 ANAGRPID is not changed : No 00:28:07.707 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:28:07.707 00:28:07.707 ANA Group Identifier Maximum : 128 00:28:07.707 Number of ANA Group Identifiers : 128 00:28:07.707 Max Number of Allowed Namespaces : 1024 00:28:07.707 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:28:07.707 Command Effects Log Page: Supported 00:28:07.707 Get Log Page Extended Data: Supported 00:28:07.707 Telemetry Log Pages: Not Supported 00:28:07.707 Persistent Event Log Pages: Not Supported 00:28:07.707 Supported Log Pages Log Page: May Support 00:28:07.707 Commands Supported & Effects Log Page: Not Supported 00:28:07.707 Feature Identifiers & Effects Log Page:May Support 00:28:07.707 NVMe-MI Commands & Effects Log Page: May Support 00:28:07.707 Data Area 4 for Telemetry Log: Not Supported 00:28:07.707 Error Log Page Entries Supported: 128 00:28:07.707 Keep Alive: Supported 00:28:07.707 Keep Alive Granularity: 1000 ms 00:28:07.707 00:28:07.707 NVM Command Set Attributes 00:28:07.707 ========================== 00:28:07.707 Submission Queue Entry Size 00:28:07.707 Max: 64 00:28:07.707 Min: 64 00:28:07.707 Completion Queue Entry Size 00:28:07.707 Max: 16 00:28:07.707 Min: 16 00:28:07.707 Number of Namespaces: 1024 00:28:07.707 Compare Command: Not Supported 00:28:07.707 Write Uncorrectable Command: Not Supported 
00:28:07.707 Dataset Management Command: Supported 00:28:07.707 Write Zeroes Command: Supported 00:28:07.707 Set Features Save Field: Not Supported 00:28:07.707 Reservations: Not Supported 00:28:07.707 Timestamp: Not Supported 00:28:07.707 Copy: Not Supported 00:28:07.707 Volatile Write Cache: Present 00:28:07.707 Atomic Write Unit (Normal): 1 00:28:07.707 Atomic Write Unit (PFail): 1 00:28:07.707 Atomic Compare & Write Unit: 1 00:28:07.707 Fused Compare & Write: Not Supported 00:28:07.707 Scatter-Gather List 00:28:07.707 SGL Command Set: Supported 00:28:07.707 SGL Keyed: Not Supported 00:28:07.707 SGL Bit Bucket Descriptor: Not Supported 00:28:07.707 SGL Metadata Pointer: Not Supported 00:28:07.707 Oversized SGL: Not Supported 00:28:07.707 SGL Metadata Address: Not Supported 00:28:07.707 SGL Offset: Supported 00:28:07.707 Transport SGL Data Block: Not Supported 00:28:07.707 Replay Protected Memory Block: Not Supported 00:28:07.707 00:28:07.707 Firmware Slot Information 00:28:07.707 ========================= 00:28:07.707 Active slot: 0 00:28:07.707 00:28:07.707 Asymmetric Namespace Access 00:28:07.707 =========================== 00:28:07.707 Change Count : 0 00:28:07.707 Number of ANA Group Descriptors : 1 00:28:07.707 ANA Group Descriptor : 0 00:28:07.707 ANA Group ID : 1 00:28:07.707 Number of NSID Values : 1 00:28:07.707 Change Count : 0 00:28:07.707 ANA State : 1 00:28:07.707 Namespace Identifier : 1 00:28:07.707 00:28:07.707 Commands Supported and Effects 00:28:07.707 ============================== 00:28:07.707 Admin Commands 00:28:07.707 -------------- 00:28:07.707 Get Log Page (02h): Supported 00:28:07.707 Identify (06h): Supported 00:28:07.707 Abort (08h): Supported 00:28:07.707 Set Features (09h): Supported 00:28:07.707 Get Features (0Ah): Supported 00:28:07.707 Asynchronous Event Request (0Ch): Supported 00:28:07.707 Keep Alive (18h): Supported 00:28:07.707 I/O Commands 00:28:07.707 ------------ 00:28:07.707 Flush (00h): Supported 00:28:07.707 Write 
(01h): Supported LBA-Change 00:28:07.707 Read (02h): Supported 00:28:07.707 Write Zeroes (08h): Supported LBA-Change 00:28:07.707 Dataset Management (09h): Supported 00:28:07.707 00:28:07.707 Error Log 00:28:07.707 ========= 00:28:07.707 Entry: 0 00:28:07.707 Error Count: 0x3 00:28:07.707 Submission Queue Id: 0x0 00:28:07.707 Command Id: 0x5 00:28:07.707 Phase Bit: 0 00:28:07.707 Status Code: 0x2 00:28:07.707 Status Code Type: 0x0 00:28:07.707 Do Not Retry: 1 00:28:07.707 Error Location: 0x28 00:28:07.707 LBA: 0x0 00:28:07.707 Namespace: 0x0 00:28:07.707 Vendor Log Page: 0x0 00:28:07.707 ----------- 00:28:07.707 Entry: 1 00:28:07.707 Error Count: 0x2 00:28:07.707 Submission Queue Id: 0x0 00:28:07.707 Command Id: 0x5 00:28:07.707 Phase Bit: 0 00:28:07.707 Status Code: 0x2 00:28:07.707 Status Code Type: 0x0 00:28:07.707 Do Not Retry: 1 00:28:07.707 Error Location: 0x28 00:28:07.707 LBA: 0x0 00:28:07.707 Namespace: 0x0 00:28:07.707 Vendor Log Page: 0x0 00:28:07.707 ----------- 00:28:07.707 Entry: 2 00:28:07.707 Error Count: 0x1 00:28:07.707 Submission Queue Id: 0x0 00:28:07.707 Command Id: 0x4 00:28:07.707 Phase Bit: 0 00:28:07.707 Status Code: 0x2 00:28:07.707 Status Code Type: 0x0 00:28:07.707 Do Not Retry: 1 00:28:07.707 Error Location: 0x28 00:28:07.707 LBA: 0x0 00:28:07.707 Namespace: 0x0 00:28:07.707 Vendor Log Page: 0x0 00:28:07.707 00:28:07.707 Number of Queues 00:28:07.707 ================ 00:28:07.707 Number of I/O Submission Queues: 128 00:28:07.707 Number of I/O Completion Queues: 128 00:28:07.707 00:28:07.707 ZNS Specific Controller Data 00:28:07.707 ============================ 00:28:07.707 Zone Append Size Limit: 0 00:28:07.707 00:28:07.707 00:28:07.707 Active Namespaces 00:28:07.707 ================= 00:28:07.707 get_feature(0x05) failed 00:28:07.707 Namespace ID:1 00:28:07.707 Command Set Identifier: NVM (00h) 00:28:07.707 Deallocate: Supported 00:28:07.707 Deallocated/Unwritten Error: Not Supported 00:28:07.707 Deallocated Read Value: Unknown 
00:28:07.707 Deallocate in Write Zeroes: Not Supported 00:28:07.707 Deallocated Guard Field: 0xFFFF 00:28:07.707 Flush: Supported 00:28:07.707 Reservation: Not Supported 00:28:07.707 Namespace Sharing Capabilities: Multiple Controllers 00:28:07.707 Size (in LBAs): 3125627568 (1490GiB) 00:28:07.707 Capacity (in LBAs): 3125627568 (1490GiB) 00:28:07.707 Utilization (in LBAs): 3125627568 (1490GiB) 00:28:07.707 UUID: 2f81f174-6560-4050-a0c8-76ea16d33860 00:28:07.707 Thin Provisioning: Not Supported 00:28:07.707 Per-NS Atomic Units: Yes 00:28:07.707 Atomic Boundary Size (Normal): 0 00:28:07.707 Atomic Boundary Size (PFail): 0 00:28:07.707 Atomic Boundary Offset: 0 00:28:07.707 NGUID/EUI64 Never Reused: No 00:28:07.707 ANA group ID: 1 00:28:07.707 Namespace Write Protected: No 00:28:07.707 Number of LBA Formats: 1 00:28:07.707 Current LBA Format: LBA Format #00 00:28:07.707 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:07.707 00:28:07.707 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:28:07.707 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:07.707 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:28:07.708 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:07.708 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:28:07.708 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:07.708 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:07.708 rmmod nvme_tcp 00:28:07.708 rmmod nvme_fabrics 00:28:07.967 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:07.967 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- 
# set -e 00:28:07.967 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:28:07.967 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:28:07.967 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:07.967 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:28:07.967 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@254 -- # local dev 00:28:07.967 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:28:07.967 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:07.967 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:07.967 08:25:21 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@121 -- # return 0 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@211 -- # local 
dev=cvl_0_0 in_ns= 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@274 -- # iptr 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-save 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:28:09.892 08:25:23 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@548 -- # iptables-restore 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # echo 0 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:28:09.892 08:25:23 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:13.216 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:28:13.216 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:28:14.225 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:28:14.483 00:28:14.483 real 0m17.427s 00:28:14.483 user 0m4.366s 00:28:14.483 sys 0m8.954s 00:28:14.483 08:25:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:14.483 08:25:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:28:14.483 ************************************ 00:28:14.483 END TEST nvmf_identify_kernel_target 00:28:14.483 ************************************ 00:28:14.483 08:25:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:14.483 08:25:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:14.483 08:25:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:14.483 08:25:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:14.483 ************************************ 00:28:14.483 START TEST nvmf_auth_host 00:28:14.483 ************************************ 00:28:14.483 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:28:14.483 * Looking for test storage... 
00:28:14.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.483 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:14.483 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:28:14.483 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:14.743 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:14.743 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:14.743 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:14.743 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:14.743 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:28:14.743 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:14.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.744 --rc genhtml_branch_coverage=1 00:28:14.744 --rc genhtml_function_coverage=1 00:28:14.744 --rc genhtml_legend=1 00:28:14.744 --rc geninfo_all_blocks=1 00:28:14.744 --rc geninfo_unexecuted_blocks=1 00:28:14.744 00:28:14.744 ' 00:28:14.744 08:25:28 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:14.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.744 --rc genhtml_branch_coverage=1 00:28:14.744 --rc genhtml_function_coverage=1 00:28:14.744 --rc genhtml_legend=1 00:28:14.744 --rc geninfo_all_blocks=1 00:28:14.744 --rc geninfo_unexecuted_blocks=1 00:28:14.744 00:28:14.744 ' 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:14.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.744 --rc genhtml_branch_coverage=1 00:28:14.744 --rc genhtml_function_coverage=1 00:28:14.744 --rc genhtml_legend=1 00:28:14.744 --rc geninfo_all_blocks=1 00:28:14.744 --rc geninfo_unexecuted_blocks=1 00:28:14.744 00:28:14.744 ' 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:14.744 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:14.744 --rc genhtml_branch_coverage=1 00:28:14.744 --rc genhtml_function_coverage=1 00:28:14.744 --rc genhtml_legend=1 00:28:14.744 --rc geninfo_all_blocks=1 00:28:14.744 --rc geninfo_unexecuted_blocks=1 00:28:14.744 00:28:14.744 ' 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@5 -- # export PATH 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:14.744 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:28:14.745 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 
00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # xtrace_disable 00:28:14.745 08:25:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.321 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # pci_devs=() 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # net_devs=() 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # e810=() 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # local -ga e810 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # x722=() 
00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # local -ga x722 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # mlx=() 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # local -ga mlx 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # [[ 
tcp == rdma ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:21.322 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:21.322 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:21.322 08:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:21.322 Found net devices under 0000:86:00.0: cvl_0_0 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:21.322 Found net devices under 0000:86:00.1: cvl_0_1 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # is_hw=yes 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@247 -- # create_target_ns 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:21.322 08:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:21.322 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:21.323 10.0.0.1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk 
tee /sys/class/net/cvl_0_1/ifalias 00:28:21.323 10.0.0.2 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:21.323 08:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:21.323 08:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:21.323 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:21.323 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.366 ms 00:28:21.323 00:28:21.323 --- 10.0.0.1 ping statistics --- 00:28:21.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.323 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:21.323 08:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:28:21.323 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:21.323 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:28:21.323 00:28:21.323 --- 10.0.0.2 ping statistics --- 00:28:21.323 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:21.323 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair++ )) 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # return 0 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:21.323 08:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:21.323 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:28:21.324 08:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator1 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # return 1 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev= 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@160 -- # return 0 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target0 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target0 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:28:21.324 08:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev target1 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=target1 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # return 1 00:28:21.324 08:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev= 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@160 -- # return 0 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:28:21.324 ' 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=1831317 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 1831317 00:28:21.324 08:25:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1831317 ']' 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:21.324 08:25:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:21.584 
08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=459b69c957b2ba0eb24740276ff8ac98 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.qSW 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 459b69c957b2ba0eb24740276ff8ac98 0 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 459b69c957b2ba0eb24740276ff8ac98 0 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=459b69c957b2ba0eb24740276ff8ac98 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:28:21.584 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:28:21.843 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.qSW 00:28:21.843 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.qSW 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.qSW 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:28:21.844 08:25:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=f114213abe025ab3868ee1542bd3dd5c2501a55b316543e815769f7ce39ae17a 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.SXO 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key f114213abe025ab3868ee1542bd3dd5c2501a55b316543e815769f7ce39ae17a 3 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 f114213abe025ab3868ee1542bd3dd5c2501a55b316543e815769f7ce39ae17a 3 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=f114213abe025ab3868ee1542bd3dd5c2501a55b316543e815769f7ce39ae17a 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 
/tmp/spdk.key-sha512.SXO 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.SXO 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.SXO 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=f2ca428dbd3423f2434c6155d5c410bc058eac4163681b43 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.IgD 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key f2ca428dbd3423f2434c6155d5c410bc058eac4163681b43 0 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 f2ca428dbd3423f2434c6155d5c410bc058eac4163681b43 0 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # 
key=f2ca428dbd3423f2434c6155d5c410bc058eac4163681b43 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.IgD 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.IgD 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.IgD 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=fea2dcc16414673122e6b46fa4af3e6aed7a69fbd2fecfcb 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.IeH 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key fea2dcc16414673122e6b46fa4af3e6aed7a69fbd2fecfcb 2 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 
fea2dcc16414673122e6b46fa4af3e6aed7a69fbd2fecfcb 2 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=fea2dcc16414673122e6b46fa4af3e6aed7a69fbd2fecfcb 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.IeH 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.IeH 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.IeH 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=459a6c6728932cdf93734ea20678b8bb 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- 
# file=/tmp/spdk.key-sha256.asX 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 459a6c6728932cdf93734ea20678b8bb 1 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 459a6c6728932cdf93734ea20678b8bb 1 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=459a6c6728932cdf93734ea20678b8bb 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:28:21.844 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.asX 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.asX 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.asX 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 
-- # key=1b519aa4f065f644d8c5013542a45610 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.ac5 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 1b519aa4f065f644d8c5013542a45610 1 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 1b519aa4f065f644d8c5013542a45610 1 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=1b519aa4f065f644d8c5013542a45610 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.ac5 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.ac5 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.ac5 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:28:22.104 08:25:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=e5188c78df86ed6c0cb4c689c83a859f1abb5cdaf2bc0d56 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.AId 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key e5188c78df86ed6c0cb4c689c83a859f1abb5cdaf2bc0d56 2 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 e5188c78df86ed6c0cb4c689c83a859f1abb5cdaf2bc0d56 2 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=e5188c78df86ed6c0cb4c689c83a859f1abb5cdaf2bc0d56 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.AId 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.AId 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.AId 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:28:22.104 08:25:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:28:22.104 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.104 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:28:22.104 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=c654f37a2c98135347bf5f0c294b0343 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.3u0 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key c654f37a2c98135347bf5f0c294b0343 0 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 c654f37a2c98135347bf5f0c294b0343 0 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=c654f37a2c98135347bf5f0c294b0343 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.3u0 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.3u0 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.3u0 00:28:22.105 08:25:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=e7649a7428a98718cb51522cd69a8f8bf93307953cf8a41979d3c4812be84578 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.3sY 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key e7649a7428a98718cb51522cd69a8f8bf93307953cf8a41979d3c4812be84578 3 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 e7649a7428a98718cb51522cd69a8f8bf93307953cf8a41979d3c4812be84578 3 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=e7649a7428a98718cb51522cd69a8f8bf93307953cf8a41979d3c4812be84578 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 
00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.3sY 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.3sY 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.3sY 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1831317 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 1831317 ']' 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.105 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:22.105 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.qSW 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.SXO ]] 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SXO 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.IgD 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.IeH ]] 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.IeH 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.asX 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.ac5 ]] 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.ac5 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.AId 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.3u0 ]] 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.3u0 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.3sY 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:22.365 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 
00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # 
kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:28:22.625 08:25:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:28:25.183 Waiting for block devices as requested 00:28:25.183 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:28:25.442 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:25.442 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:25.442 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:25.700 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:25.700 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:25.700 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:25.700 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:25.959 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:25.959 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:28:25.959 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:28:25.959 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:28:26.216 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:28:26.216 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:28:26.216 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:28:26.475 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:28:26.475 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:28:27.042 08:25:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:28:27.042 No valid GPT data, bailing 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:28:27.042 08:25:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:28:27.042 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0
00:28:27.042 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1
00:28:27.042 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme0n1
00:28:27.042 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1
00:28:27.042 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 -- # echo 10.0.0.1
00:28:27.042 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo tcp
00:28:27.042 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420
00:28:27.042 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4
00:28:27.042 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/
00:28:27.042 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420
00:28:27.300
00:28:27.300 Discovery Log Number of Records 2, Generation counter 2
00:28:27.300 =====Discovery Log Entry 0======
00:28:27.300 trtype: tcp
00:28:27.300 adrfam: ipv4
00:28:27.300 subtype: current discovery subsystem
00:28:27.300 treq: not specified, sq flow control disable supported
00:28:27.300 portid: 1
00:28:27.300 trsvcid: 4420
00:28:27.300 subnqn: nqn.2014-08.org.nvmexpress.discovery
00:28:27.300 traddr: 10.0.0.1
00:28:27.300 eflags: none
00:28:27.300 sectype: none
00:28:27.300 =====Discovery Log Entry 1======
00:28:27.300 trtype: tcp
00:28:27.300 adrfam: ipv4
00:28:27.300 subtype: nvme subsystem
00:28:27.300 treq: not specified, sq flow control disable supported
00:28:27.300 portid: 1
00:28:27.300 trsvcid: 4420
00:28:27.300 subnqn: nqn.2024-02.io.spdk:cnode0
00:28:27.300 traddr: 10.0.0.1
00:28:27.300 eflags: none
00:28:27.300 sectype: none
00:28:27.300 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
00:28:27.300 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0
00:28:27.300 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0
00:28:27.300 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:27.300 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.300 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:27.300 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:27.300 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:27.300 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==:
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==:
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==:
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]]
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==:
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=,
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.301 nvme0n1
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.301 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7:
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=:
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7:
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]]
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=:
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:27.560 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.561 nvme0n1
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==:
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==:
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:27.561 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==:
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]]
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==:
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.820 nvme0n1
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:27.820 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L:
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz:
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L:
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]]
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz:
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:27.821 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:28.079 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:28.079 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:28.079 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:28.079 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:28.079 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.079 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.079 nvme0n1
00:28:28.079 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.079 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:28.079 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:28.079 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.079 08:25:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==:
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg:
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==:
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]]
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg:
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.079 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.338 nvme0n1
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=:
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=:
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:28.338 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:28.339 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:28.339 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:28.339 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:28.339 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:28.339 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:28.339 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:28.339 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:28.339 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:28.339 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.339 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.598 nvme0n1
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7:
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=:
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7:
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]]
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=:
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key
"ckey${keyid}"}) 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:28.598 08:25:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.598 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.858 nvme0n1 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:28.858 08:25:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:28.858 08:25:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:28.858 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:28.859 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.119 nvme0n1 00:28:29.119 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.119 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.119 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.119 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.119 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.119 08:25:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.119 08:25:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.119 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.379 nvme0n1 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]] 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe3072 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:29.379 08:25:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.379 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.639 nvme0n1 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.639 08:25:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:29.639 08:25:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:29.639 08:25:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.639 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.899 nvme0n1
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7:
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=:
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7:
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]]
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=:
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:29.899 08:25:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.158 nvme0n1
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==:
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==:
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==:
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]]
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==:
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:30.158 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.159 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.417 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.675 nvme0n1
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:30.675 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L:
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz:
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L:
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]]
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz:
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.676 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.935 nvme0n1
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==:
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg:
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==:
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]]
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg:
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:30.935 08:25:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.194 nvme0n1
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=:
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=:
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:28:31.194 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.195 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.498 nvme0n1
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7:
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=:
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7:
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]]
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=:
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.498 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:31.757 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:32.016 nvme0n1
00:28:32.016 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.016 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:32.016 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:32.016 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:32.016 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.017 08:25:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:32.017 08:25:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.017 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.583 nvme0n1 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 
00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 
00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.583 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.841 nvme0n1 00:28:32.841 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.841 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:32.841 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:32.841 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.841 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:32.841 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.101 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.101 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.101 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.101 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.101 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.101 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.101 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:28:33.101 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]] 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:33.102 08:25:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.102 08:25:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.361 nvme0n1 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:33.361 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:33.362 08:25:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:33.362 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:33.621 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:33.621 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:33.621 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:33.621 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:33.621 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.621 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.879 nvme0n1 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.879 08:25:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]] 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:33.879 08:25:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:33.879 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:33.880 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:33.880 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:33.880 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.880 08:25:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.447 nvme0n1 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:34.447 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:34.448 08:25:48 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.448 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:34.707 08:25:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.276 nvme0n1 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 
00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.276 08:25:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:35.276 08:25:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.276 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.846 nvme0n1 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]] 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:35.846 08:25:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.415 nvme0n1 00:28:36.415 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.415 
08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:36.415 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:36.415 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.415 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.415 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:36.674 08:25:50 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 
in_ns= ip 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:36.674 08:25:50 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.242 nvme0n1 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- 
# jq -r '.[].name' 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:37.242 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]] 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.243 08:25:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:37.243 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.503 nvme0n1 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- 
# key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.503 
08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.503 nvme0n1 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.503 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.762 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.762 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.762 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:37.762 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:37.763 08:25:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.763 nvme0n1 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:37.763 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:38.022 08:25:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]] 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.022 08:25:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.022 nvme0n1 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.022 08:25:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.022 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.022 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.023 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.023 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.023 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.023 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:38.282 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.283 nvme0n1 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.283 08:25:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]] 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.283 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.542 nvme0n1 00:28:38.542 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # digest=sha384 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.543 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.802 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.802 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:38.802 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:38.802 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:38.802 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:38.802 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:38.802 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:38.802 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:38.802 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.803 nvme0n1 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:38.803 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.062 08:25:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.062 nvme0n1 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:39.062 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:39.063 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:39.063 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:39.063 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.063 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:39.063 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:39.063 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]] 00:28:39.063 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.321 08:25:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:39.321 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 
10.0.0.1 ]] 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.322 nvme0n1 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.322 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 
ffdhe3072 4 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:28:39.581 08:25:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.581 nvme0n1 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.581 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]] 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:39.840 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:39.841 
08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:39.841 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.099 nvme0n1 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.099 08:25:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:40.099 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:40.100 08:25:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.100 08:25:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.359 nvme0n1 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:40.359 08:25:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.359 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.360 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.619 nvme0n1 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]] 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe4096 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:40.619 08:25:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:40.619 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:40.878 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:40.879 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:40.879 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.879 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.879 nvme0n1 00:28:40.879 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:40.879 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:40.879 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:40.879 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:40.879 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:40.879 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.138 08:25:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:41.138 08:25:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:41.138 08:25:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.138 08:25:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.397 nvme0n1 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for 
dhgroup in "${dhgroups[@]}" 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.397 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]] 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha384 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:41.398 08:25:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.398 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.966 nvme0n1 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.966 08:25:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:28:41.966 08:25:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo 
cvl_0_0 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:41.966 08:25:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.225 nvme0n1 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:28:42.225 
08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 
-- # echo cvl_0_0 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:42.225 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:42.484 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:42.484 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:42.484 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:42.484 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:42.484 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.484 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.744 nvme0n1 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]] 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha384 ffdhe6144 3 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:42.744 08:25:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:42.744 08:25:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.312 nvme0n1 00:28:43.312 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.312 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.312 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.313 
08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 
00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.313 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.572 nvme0n1 00:28:43.572 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.572 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:43.572 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:43.572 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.572 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.572 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.572 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:43.572 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:43.572 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.572 08:25:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.572 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]] 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:43.831 08:25:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:43.831 08:25:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.399 nvme0n1 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 
00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.399 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:44.400 08:25:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.400 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.967 nvme0n1 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 
00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 
00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:44.967 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:44.968 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:44.968 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:44.968 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:44.968 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:44.968 08:25:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.534 nvme0n1 00:28:45.534 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.534 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:45.534 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:45.534 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.534 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.534 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.534 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:45.534 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:45.534 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.534 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]] 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:45.794 08:25:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:45.794 08:25:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.364 nvme0n1 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:46.364 08:26:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.364 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.932 nvme0n1 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.932 08:26:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]] 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:46.932 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 
00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:46.933 08:26:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.192 nvme0n1 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.192 08:26:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.192 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.451 nvme0n1 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.451 08:26:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:47.451 08:26:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.451 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.710 nvme0n1 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.710 
08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 
00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]] 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 
00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.710 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.969 nvme0n1 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey= 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:47.969 08:26:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.229 nvme0n1 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.229 08:26:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]] 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.229 08:26:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:48.229 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.488 nvme0n1 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.488 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- 
# key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.489 
08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.489 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.748 nvme0n1 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:48.748 08:26:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:28:48.748 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:48.749 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.008 nvme0n1 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:49.008 08:26:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]] 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.008 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.009 08:26:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.009 08:26:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.268 nvme0n1 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:28:49.268 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 
0 == 0 ]] 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:49.269 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.528 nvme0n1 00:28:49.528 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.528 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:49.528 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:49.528 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.528 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.528 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.528 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:49.528 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:28:49.529 08:26:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7:
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=:
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7:
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]]
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=:
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.529 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:49.788 nvme0n1
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==:
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==:
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==:
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]]
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==:
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:49.788 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:49.789 08:26:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.048 nvme0n1
00:28:50.048 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.048 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:50.048 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:50.048 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:50.048 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.048 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L:
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz:
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L:
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]]
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz:
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:50.307 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.567 nvme0n1
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==:
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg:
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==:
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]]
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg:
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:50.567 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.827 nvme0n1
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=:
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=:
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:50.827 08:26:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:51.086 nvme0n1
00:28:51.086 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:51.086 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:51.086 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:51.086 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:51.086 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:51.086 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7:
00:28:51.345 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=:
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7:
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]]
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=:
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]]
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:51.346 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:51.605 nvme0n1
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==:
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==:
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==:
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]]
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==:
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]]
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]]
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]]
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:51.606 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163
-- # cat /sys/class/net/cvl_0_0/ifalias 00:28:51.865 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:51.865 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:51.865 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:51.865 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:51.865 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.865 08:26:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.125 nvme0n1 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:52.125 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:52.126 08:26:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 
-- # cat /sys/class/net/cvl_0_0/ifalias 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.126 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.695 nvme0n1 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]] 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- 
# dhgroup=ffdhe6144 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:52.695 08:26:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.695 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.955 nvme0n1 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:52.955 08:26:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:28:52.955 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:53.215 08:26:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:53.215 08:26:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:53.215 08:26:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:53.215 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:53.215 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:53.215 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:53.215 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.215 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.475 nvme0n1 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for 
dhgroup in "${dhgroups[@]}" 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NDU5YjY5Yzk1N2IyYmEwZWIyNDc0MDI3NmZmOGFjOTgkO3Y7: 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: ]] 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:ZjExNDIxM2FiZTAyNWFiMzg2OGVlMTU0MmJkM2RkNWMyNTAxYTU1YjMxNjU0M2U4MTU3NjlmN2NlMzlhZTE3YV9Ls34=: 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # digest=sha512 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:53.475 08:26:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.475 08:26:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.042 nvme0n1 00:28:54.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.042 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.301 08:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:28:54.301 08:26:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo 
cvl_0_0 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:54.301 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:54.302 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:54.302 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:28:54.302 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.302 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.894 nvme0n1 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:28:54.894 
08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:54.894 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 
-- # echo cvl_0_0 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:54.895 08:26:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.531 nvme0n1 00:28:55.531 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.531 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 
00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZTUxODhjNzhkZjg2ZWQ2YzBjYjRjNjg5YzgzYTg1OWYxYWJiNWNkYWYyYmMwZDU2NSVQJg==: 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: ]] 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YzY1NGYzN2EyYzk4MTM1MzQ3YmY1ZjBjMjk0YjAzNDPae0kg: 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe8192 3 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:55.532 08:26:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:55.532 08:26:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.109 nvme0n1 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZTc2NDlhNzQyOGE5ODcxOGNiNTE1MjJjZDY5YThmOGJmOTMzMDc5NTNjZjhhNDE5NzlkM2M0ODEyYmU4NDU3OLhxE30=: 00:28:56.109 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:28:56.110 
08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 
00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.110 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.047 nvme0n1 00:28:57.047 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.047 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.047 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:28:57.047 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.048 08:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 
-t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.048 request: 00:28:57.048 { 00:28:57.048 "name": "nvme0", 00:28:57.048 "trtype": "tcp", 00:28:57.048 "traddr": "10.0.0.1", 00:28:57.048 "adrfam": "ipv4", 00:28:57.048 "trsvcid": "4420", 00:28:57.048 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:57.048 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:57.048 "prchk_reftag": false, 00:28:57.048 "prchk_guard": false, 00:28:57.048 "hdgst": false, 00:28:57.048 "ddgst": false, 00:28:57.048 "allow_unrecognized_csi": false, 00:28:57.048 "method": "bdev_nvme_attach_controller", 00:28:57.048 "req_id": 1 00:28:57.048 } 00:28:57.048 Got JSON-RPC error response 00:28:57.048 response: 
00:28:57.048 { 00:28:57.048 "code": -5, 00:28:57.048 "message": "Input/output error" 00:28:57.048 } 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:57.048 08:26:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.048 
08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.048 08:26:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.048 request: 00:28:57.048 { 00:28:57.048 "name": "nvme0", 00:28:57.048 "trtype": "tcp", 00:28:57.048 "traddr": "10.0.0.1", 00:28:57.048 "adrfam": "ipv4", 00:28:57.048 "trsvcid": "4420", 00:28:57.048 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:57.049 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:57.049 "prchk_reftag": false, 00:28:57.049 "prchk_guard": false, 00:28:57.049 "hdgst": false, 00:28:57.049 "ddgst": false, 00:28:57.049 "dhchap_key": "key2", 00:28:57.049 "allow_unrecognized_csi": false, 00:28:57.049 "method": "bdev_nvme_attach_controller", 00:28:57.049 "req_id": 1 00:28:57.049 } 00:28:57.049 Got JSON-RPC error response 00:28:57.049 response: 00:28:57.049 { 00:28:57.049 "code": -5, 00:28:57.049 "message": "Input/output error" 00:28:57.049 } 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:28:57.049 08:26:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:57.049 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:57.308 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:57.308 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.308 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:57.308 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.308 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:57.308 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.308 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.308 request: 00:28:57.309 { 00:28:57.309 "name": "nvme0", 00:28:57.309 "trtype": "tcp", 00:28:57.309 "traddr": "10.0.0.1", 00:28:57.309 "adrfam": "ipv4", 00:28:57.309 "trsvcid": "4420", 00:28:57.309 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:28:57.309 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:28:57.309 "prchk_reftag": false, 
00:28:57.309 "prchk_guard": false, 00:28:57.309 "hdgst": false, 00:28:57.309 "ddgst": false, 00:28:57.309 "dhchap_key": "key1", 00:28:57.309 "dhchap_ctrlr_key": "ckey2", 00:28:57.309 "allow_unrecognized_csi": false, 00:28:57.309 "method": "bdev_nvme_attach_controller", 00:28:57.309 "req_id": 1 00:28:57.309 } 00:28:57.309 Got JSON-RPC error response 00:28:57.309 response: 00:28:57.309 { 00:28:57.309 "code": -5, 00:28:57.309 "message": "Input/output error" 00:28:57.309 } 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.309 nvme0n1 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:57.309 08:26:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.309 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key 
key1 --dhchap-ctrlr-key ckey2 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.568 request: 00:28:57.568 { 00:28:57.568 "name": "nvme0", 00:28:57.568 "dhchap_key": "key1", 00:28:57.568 "dhchap_ctrlr_key": "ckey2", 00:28:57.568 "method": "bdev_nvme_set_keys", 00:28:57.568 "req_id": 1 00:28:57.568 } 00:28:57.568 Got JSON-RPC error response 00:28:57.568 response: 00:28:57.568 { 00:28:57.568 "code": -13, 00:28:57.568 "message": "Permission denied" 00:28:57.568 } 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:57.568 
08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:57.568 08:26:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:58.504 08:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:58.504 08:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:58.504 08:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:58.504 08:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:58.763 08:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:58.763 08:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:28:58.763 08:26:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjJjYTQyOGRiZDM0MjNmMjQzNGM2MTU1ZDVjNDEwYmMwNThlYWM0MTYzNjgxYjQzU9qO1Q==: 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: ]] 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmVhMmRjYzE2NDE0NjczMTIyZTZiNDZmYTRhZjNlNmFlZDdhNjlmYmQyZmVjZmNid28egA==: 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:28:59.700 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # local dev=initiator0 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.701 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.961 nvme0n1 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NDU5YTZjNjcyODkzMmNkZjkzNzM0ZWEyMDY3OGI4YmIqdA3L: 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: ]] 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MWI1MTlhYTRmMDY1ZjY0NGQ4YzUwMTM1NDJhNDU2MTC5F3Cz: 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # 
local arg=rpc_cmd 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.961 request: 00:28:59.961 { 00:28:59.961 "name": "nvme0", 00:28:59.961 "dhchap_key": "key2", 00:28:59.961 "dhchap_ctrlr_key": "ckey1", 00:28:59.961 "method": "bdev_nvme_set_keys", 00:28:59.961 "req_id": 1 00:28:59.961 } 00:28:59.961 Got JSON-RPC error response 00:28:59.961 response: 00:28:59.961 { 00:28:59.961 "code": -13, 00:28:59.961 "message": "Permission denied" 00:28:59.961 } 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:28:59.961 08:26:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:29:00.897 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:29:00.897 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:29:00.897 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.897 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:00.897 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:01.156 rmmod nvme_tcp 00:29:01.156 rmmod nvme_fabrics 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 1831317 ']' 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 1831317 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 1831317 ']' 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 1831317 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:01.156 08:26:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1831317 00:29:01.156 08:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:01.156 08:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:01.156 08:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1831317' 00:29:01.156 killing process with pid 1831317 00:29:01.156 08:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 1831317 00:29:01.156 08:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 1831317 00:29:01.415 08:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:01.415 08:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini 00:29:01.415 08:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@254 -- # local dev 00:29:01.415 08:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@257 
-- # remove_target_ns 00:29:01.415 08:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:01.415 08:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:01.415 08:26:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:03.321 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:03.321 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:03.321 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@121 -- # return 0 00:29:03.321 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:03.321 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:03.321 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:03.321 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:03.322 08:26:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=() 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@274 -- # iptr 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-save 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@548 -- # iptables-restore 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:29:03.322 08:26:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:06.615 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:06.615 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:07.995 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:07.995 08:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.qSW /tmp/spdk.key-null.IgD /tmp/spdk.key-sha256.asX /tmp/spdk.key-sha384.AId /tmp/spdk.key-sha512.3sY 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:29:07.995 08:26:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:10.531 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:29:10.531 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:29:10.531 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:29:10.790 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:29:10.790 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:29:10.790 00:29:10.790 real 0m56.331s 00:29:10.790 user 0m50.755s 00:29:10.790 sys 0m13.446s 00:29:10.790 08:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:10.790 08:26:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.790 ************************************ 00:29:10.790 END TEST nvmf_auth_host 00:29:10.790 ************************************ 00:29:10.790 08:26:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 
00:29:10.790 08:26:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:10.790 08:26:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:10.790 08:26:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:10.790 08:26:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:10.790 ************************************ 00:29:10.790 START TEST nvmf_digest 00:29:10.790 ************************************ 00:29:10.790 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:29:11.051 * Looking for test storage... 00:29:11.051 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
scripts/common.sh@337 -- # read -ra ver2 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:11.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.051 --rc genhtml_branch_coverage=1 00:29:11.051 --rc genhtml_function_coverage=1 00:29:11.051 --rc genhtml_legend=1 00:29:11.051 --rc geninfo_all_blocks=1 00:29:11.051 --rc geninfo_unexecuted_blocks=1 00:29:11.051 00:29:11.051 ' 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:11.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.051 --rc genhtml_branch_coverage=1 00:29:11.051 --rc genhtml_function_coverage=1 00:29:11.051 --rc genhtml_legend=1 00:29:11.051 --rc geninfo_all_blocks=1 00:29:11.051 --rc geninfo_unexecuted_blocks=1 00:29:11.051 00:29:11.051 ' 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:11.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.051 --rc genhtml_branch_coverage=1 00:29:11.051 --rc genhtml_function_coverage=1 00:29:11.051 --rc genhtml_legend=1 00:29:11.051 --rc geninfo_all_blocks=1 00:29:11.051 --rc geninfo_unexecuted_blocks=1 00:29:11.051 00:29:11.051 ' 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:11.051 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:11.051 --rc genhtml_branch_coverage=1 00:29:11.051 --rc genhtml_function_coverage=1 00:29:11.051 --rc genhtml_legend=1 00:29:11.051 --rc geninfo_all_blocks=1 00:29:11.051 --rc geninfo_unexecuted_blocks=1 00:29:11.051 00:29:11.051 ' 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:29:11.051 08:26:24 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:11.051 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # : 0 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:11.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # remove_target_ns 00:29:11.052 
08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # xtrace_disable 00:29:11.052 08:26:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # pci_devs=() 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # net_devs=() 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # e810=() 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # local -ga e810 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # x722=() 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # 
local -ga x722 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # mlx=() 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # local -ga mlx 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:17.625 
08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:17.625 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:17.625 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:17.625 08:26:30 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:17.625 Found net devices under 0000:86:00.0: cvl_0_0 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:17.625 Found net devices under 0000:86:00.1: cvl_0_1 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # is_hw=yes 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@247 -- # create_target_ns 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:17.625 
08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@28 -- # local -g _dev 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:17.625 08:26:30 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772161 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:17.625 10.0.0.1 
00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772162 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:17.625 10.0.0.2 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:17.625 
08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:17.625 08:26:30 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:17.625 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 
in_ns=NVMF_TARGET_NS_CMD count=1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:17.626 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:17.626 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.266 ms 00:29:17.626 00:29:17.626 --- 10.0.0.1 ping statistics --- 00:29:17.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.626 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:17.626 
08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:17.626 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:17.626 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.144 ms 00:29:17.626 00:29:17.626 --- 10.0.0.2 ping statistics --- 00:29:17.626 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:17.626 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # return 0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:17.626 
08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # return 1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev= 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@160 -- # return 0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # local dev=target1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # return 1 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@159 -- # dev= 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@160 -- # return 0 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:29:17.626 ' 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:29:17.626 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:29:17.627 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:17.627 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:17.627 08:26:30 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:17.627 ************************************ 00:29:17.627 START TEST nvmf_digest_clean 00:29:17.627 ************************************ 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # 
xtrace_disable 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@328 -- # nvmfpid=1845587 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@329 -- # waitforlisten 1845587 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1845587 ']' 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:17.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:17.627 [2024-11-20 08:26:31.073446] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:29:17.627 [2024-11-20 08:26:31.073486] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:17.627 [2024-11-20 08:26:31.132518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.627 [2024-11-20 08:26:31.172812] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:17.627 [2024-11-20 08:26:31.172846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:17.627 [2024-11-20 08:26:31.172853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:17.627 [2024-11-20 08:26:31.172859] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:17.627 [2024-11-20 08:26:31.172865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:17.627 [2024-11-20 08:26:31.173434] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:17.627 null0 00:29:17.627 [2024-11-20 08:26:31.335336] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:17.627 [2024-11-20 08:26:31.359551] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1845606 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1845606 /var/tmp/bperf.sock 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1845606 ']' 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:17.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:17.627 [2024-11-20 08:26:31.413830] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:29:17.627 [2024-11-20 08:26:31.413872] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1845606 ] 00:29:17.627 [2024-11-20 08:26:31.488104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:17.627 [2024-11-20 08:26:31.529933] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:17.627 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:17.886 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:17.886 08:26:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:18.456 nvme0n1 00:29:18.456 08:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:18.456 08:26:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:18.456 Running I/O for 2 seconds... 00:29:20.330 25621.00 IOPS, 100.08 MiB/s [2024-11-20T07:26:34.358Z] 25939.50 IOPS, 101.33 MiB/s 00:29:20.330 Latency(us) 00:29:20.330 [2024-11-20T07:26:34.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.330 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:29:20.330 nvme0n1 : 2.00 25952.63 101.38 0.00 0.00 4927.99 2527.82 11359.57 00:29:20.330 [2024-11-20T07:26:34.358Z] =================================================================================================================== 00:29:20.330 [2024-11-20T07:26:34.358Z] Total : 25952.63 101.38 0.00 0.00 4927.99 2527.82 11359.57 00:29:20.330 { 00:29:20.330 "results": [ 00:29:20.330 { 00:29:20.330 "job": "nvme0n1", 00:29:20.330 "core_mask": "0x2", 00:29:20.330 "workload": "randread", 00:29:20.330 "status": "finished", 00:29:20.330 "queue_depth": 128, 00:29:20.330 "io_size": 4096, 00:29:20.330 "runtime": 2.00392, 00:29:20.330 "iops": 25952.632839634316, 00:29:20.330 "mibps": 101.37747202982155, 00:29:20.330 "io_failed": 0, 00:29:20.330 "io_timeout": 0, 00:29:20.330 "avg_latency_us": 4927.9875509798585, 00:29:20.330 "min_latency_us": 2527.8171428571427, 00:29:20.330 "max_latency_us": 11359.573333333334 00:29:20.330 } 00:29:20.330 ], 00:29:20.330 "core_count": 1 00:29:20.330 } 00:29:20.330 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:20.330 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:29:20.330 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:20.330 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:20.331 | select(.opcode=="crc32c") 00:29:20.331 | "\(.module_name) \(.executed)"' 00:29:20.331 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1845606 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1845606 ']' 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1845606 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1845606 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1845606' 00:29:20.590 killing process with pid 1845606 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1845606 00:29:20.590 Received shutdown signal, test time was about 2.000000 seconds 00:29:20.590 00:29:20.590 Latency(us) 00:29:20.590 [2024-11-20T07:26:34.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:20.590 [2024-11-20T07:26:34.618Z] =================================================================================================================== 00:29:20.590 [2024-11-20T07:26:34.618Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:20.590 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1845606 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1846079 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 1846079 /var/tmp/bperf.sock 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1846079 ']' 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:20.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:20.850 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:20.850 [2024-11-20 08:26:34.753831] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:29:20.850 [2024-11-20 08:26:34.753882] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846079 ] 00:29:20.850 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:20.850 Zero copy mechanism will not be used. 
00:29:20.850 [2024-11-20 08:26:34.828409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:20.850 [2024-11-20 08:26:34.867193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:21.109 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:21.109 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:21.109 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:21.109 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:21.109 08:26:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:21.369 08:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.369 08:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:21.628 nvme0n1 00:29:21.628 08:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:21.628 08:26:35 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:21.628 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:21.628 Zero copy mechanism will not be used. 00:29:21.628 Running I/O for 2 seconds... 
00:29:23.943 6126.00 IOPS, 765.75 MiB/s [2024-11-20T07:26:37.971Z] 5670.00 IOPS, 708.75 MiB/s 00:29:23.943 Latency(us) 00:29:23.943 [2024-11-20T07:26:37.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.943 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:23.943 nvme0n1 : 2.00 5672.05 709.01 0.00 0.00 2818.30 663.16 6023.07 00:29:23.943 [2024-11-20T07:26:37.971Z] =================================================================================================================== 00:29:23.943 [2024-11-20T07:26:37.971Z] Total : 5672.05 709.01 0.00 0.00 2818.30 663.16 6023.07 00:29:23.943 { 00:29:23.943 "results": [ 00:29:23.943 { 00:29:23.943 "job": "nvme0n1", 00:29:23.943 "core_mask": "0x2", 00:29:23.943 "workload": "randread", 00:29:23.943 "status": "finished", 00:29:23.943 "queue_depth": 16, 00:29:23.943 "io_size": 131072, 00:29:23.943 "runtime": 2.002098, 00:29:23.943 "iops": 5672.0500195295135, 00:29:23.943 "mibps": 709.0062524411892, 00:29:23.943 "io_failed": 0, 00:29:23.943 "io_timeout": 0, 00:29:23.943 "avg_latency_us": 2818.298113017662, 00:29:23.943 "min_latency_us": 663.1619047619048, 00:29:23.943 "max_latency_us": 6023.070476190476 00:29:23.943 } 00:29:23.943 ], 00:29:23.943 "core_count": 1 00:29:23.943 } 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:23.943 | select(.opcode=="crc32c") 00:29:23.943 | "\(.module_name) \(.executed)"' 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1846079 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1846079 ']' 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1846079 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1846079 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1846079' 00:29:23.943 killing process with pid 1846079 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1846079 00:29:23.943 Received shutdown signal, test time was about 2.000000 seconds 
00:29:23.943 00:29:23.943 Latency(us) 00:29:23.943 [2024-11-20T07:26:37.971Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:23.943 [2024-11-20T07:26:37.971Z] =================================================================================================================== 00:29:23.943 [2024-11-20T07:26:37.971Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:23.943 08:26:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1846079 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1846771 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1846771 /var/tmp/bperf.sock 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1846771 ']' 00:29:24.202 08:26:38 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:24.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:24.202 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:24.202 [2024-11-20 08:26:38.086355] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:29:24.202 [2024-11-20 08:26:38.086401] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1846771 ] 00:29:24.202 [2024-11-20 08:26:38.160506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:24.202 [2024-11-20 08:26:38.202223] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:24.462 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:24.462 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:24.462 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:24.462 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:24.462 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:24.722 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:24.722 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:24.981 nvme0n1 00:29:24.981 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:24.981 08:26:38 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:24.981 Running I/O for 2 seconds... 
00:29:27.296 28811.00 IOPS, 112.54 MiB/s [2024-11-20T07:26:41.324Z] 28524.00 IOPS, 111.42 MiB/s 00:29:27.296 Latency(us) 00:29:27.296 [2024-11-20T07:26:41.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.296 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:27.296 nvme0n1 : 2.00 28521.25 111.41 0.00 0.00 4482.23 1778.83 9050.21 00:29:27.296 [2024-11-20T07:26:41.324Z] =================================================================================================================== 00:29:27.296 [2024-11-20T07:26:41.324Z] Total : 28521.25 111.41 0.00 0.00 4482.23 1778.83 9050.21 00:29:27.296 { 00:29:27.296 "results": [ 00:29:27.296 { 00:29:27.296 "job": "nvme0n1", 00:29:27.296 "core_mask": "0x2", 00:29:27.296 "workload": "randwrite", 00:29:27.296 "status": "finished", 00:29:27.296 "queue_depth": 128, 00:29:27.296 "io_size": 4096, 00:29:27.296 "runtime": 2.003033, 00:29:27.297 "iops": 28521.2475281236, 00:29:27.297 "mibps": 111.41112315673281, 00:29:27.297 "io_failed": 0, 00:29:27.297 "io_timeout": 0, 00:29:27.297 "avg_latency_us": 4482.2254911482705, 00:29:27.297 "min_latency_us": 1778.8342857142857, 00:29:27.297 "max_latency_us": 9050.209523809524 00:29:27.297 } 00:29:27.297 ], 00:29:27.297 "core_count": 1 00:29:27.297 } 00:29:27.297 08:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:27.297 08:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:27.297 08:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:27.297 08:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:27.297 | select(.opcode=="crc32c") 00:29:27.297 | "\(.module_name) \(.executed)"' 00:29:27.297 08:26:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1846771 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1846771 ']' 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1846771 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1846771 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1846771' 00:29:27.297 killing process with pid 1846771 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1846771 00:29:27.297 Received shutdown signal, test time was about 2.000000 seconds 
00:29:27.297 00:29:27.297 Latency(us) 00:29:27.297 [2024-11-20T07:26:41.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:27.297 [2024-11-20T07:26:41.325Z] =================================================================================================================== 00:29:27.297 [2024-11-20T07:26:41.325Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:27.297 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1846771 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1847242 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1847242 /var/tmp/bperf.sock 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 1847242 ']' 00:29:27.557 08:26:41 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:27.557 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:27.557 [2024-11-20 08:26:41.371293] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:29:27.557 [2024-11-20 08:26:41.371342] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847242 ] 00:29:27.557 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:27.557 Zero copy mechanism will not be used. 
00:29:27.557 [2024-11-20 08:26:41.443881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:27.557 [2024-11-20 08:26:41.480721] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:29:27.557 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:29:27.816 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:27.816 08:26:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:28.076 nvme0n1 00:29:28.335 08:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:29:28.335 08:26:42 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:28.335 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:28.335 Zero copy mechanism will not be used. 00:29:28.335 Running I/O for 2 seconds... 
00:29:30.209 6512.00 IOPS, 814.00 MiB/s [2024-11-20T07:26:44.237Z] 6484.50 IOPS, 810.56 MiB/s 00:29:30.209 Latency(us) 00:29:30.209 [2024-11-20T07:26:44.237Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.209 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:30.209 nvme0n1 : 2.00 6481.03 810.13 0.00 0.00 2464.29 1958.28 10236.10 00:29:30.209 [2024-11-20T07:26:44.237Z] =================================================================================================================== 00:29:30.209 [2024-11-20T07:26:44.237Z] Total : 6481.03 810.13 0.00 0.00 2464.29 1958.28 10236.10 00:29:30.209 { 00:29:30.209 "results": [ 00:29:30.209 { 00:29:30.209 "job": "nvme0n1", 00:29:30.209 "core_mask": "0x2", 00:29:30.209 "workload": "randwrite", 00:29:30.209 "status": "finished", 00:29:30.209 "queue_depth": 16, 00:29:30.209 "io_size": 131072, 00:29:30.209 "runtime": 2.003541, 00:29:30.209 "iops": 6481.025344627337, 00:29:30.209 "mibps": 810.1281680784172, 00:29:30.209 "io_failed": 0, 00:29:30.209 "io_timeout": 0, 00:29:30.209 "avg_latency_us": 2464.291853237252, 00:29:30.209 "min_latency_us": 1958.2780952380951, 00:29:30.209 "max_latency_us": 10236.099047619047 00:29:30.209 } 00:29:30.209 ], 00:29:30.209 "core_count": 1 00:29:30.209 } 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:29:30.468 | select(.opcode=="crc32c") 00:29:30.468 | "\(.module_name) \(.executed)"' 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1847242 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1847242 ']' 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1847242 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:30.468 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:30.469 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1847242 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1847242' 00:29:30.728 killing process with pid 1847242 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1847242 00:29:30.728 Received shutdown signal, test time was about 2.000000 seconds 
00:29:30.728 00:29:30.728 Latency(us) 00:29:30.728 [2024-11-20T07:26:44.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:30.728 [2024-11-20T07:26:44.756Z] =================================================================================================================== 00:29:30.728 [2024-11-20T07:26:44.756Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1847242 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1845587 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 1845587 ']' 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 1845587 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1845587 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:30.728 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:30.729 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1845587' 00:29:30.729 killing process with pid 1845587 00:29:30.729 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 1845587 00:29:30.729 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 1845587 00:29:30.988 00:29:30.988 
real 0m13.827s 00:29:30.988 user 0m26.389s 00:29:30.988 sys 0m4.618s 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:29:30.988 ************************************ 00:29:30.988 END TEST nvmf_digest_clean 00:29:30.988 ************************************ 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:30.988 ************************************ 00:29:30.988 START TEST nvmf_digest_error 00:29:30.988 ************************************ 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@328 -- # nvmfpid=1847796 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@329 -- # waitforlisten 1847796 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@327 -- # ip netns exec 
nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1847796 ']' 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.988 08:26:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:30.988 [2024-11-20 08:26:44.977115] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:29:30.988 [2024-11-20 08:26:44.977157] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.248 [2024-11-20 08:26:45.038825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.248 [2024-11-20 08:26:45.081725] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.248 [2024-11-20 08:26:45.081759] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:31.248 [2024-11-20 08:26:45.081766] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.248 [2024-11-20 08:26:45.081772] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.248 [2024-11-20 08:26:45.081777] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:31.248 [2024-11-20 08:26:45.082358] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:31.248 [2024-11-20 08:26:45.158825] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.248 08:26:45 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.248 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:31.248 null0 00:29:31.248 [2024-11-20 08:26:45.254281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.508 [2024-11-20 08:26:45.278490] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1847980 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1847980 /var/tmp/bperf.sock 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1847980 ']' 
00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:31.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:31.508 [2024-11-20 08:26:45.328447] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:29:31.508 [2024-11-20 08:26:45.328491] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1847980 ] 00:29:31.508 [2024-11-20 08:26:45.389565] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.508 [2024-11-20 08:26:45.433713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:31.508 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:31.768 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:31.768 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.768 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:31.768 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.768 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:31.768 08:26:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:32.337 nvme0n1 00:29:32.337 08:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:32.337 08:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.337 08:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:32.337 08:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.337 08:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:32.337 08:26:46 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:32.337 Running I/O for 2 seconds...
00:29:32.337 [2024-11-20 08:26:46.190065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70)
00:29:32.337 [2024-11-20 08:26:46.190100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:21500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:32.337 [2024-11-20 08:26:46.190111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same three-entry pattern (nvme_tcp.c:1365 data digest error on tqpair=(0x1f84d70) / nvme_qpair.c:243 READ command print / nvme_qpair.c:474 COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats for further qid:1 commands with varying cid and lba values, from 08:26:46.200608 through 08:26:46.963961; repeated entries elided ...]
sqhd:0001 p:0 m:0 dnr:0 00:29:33.121 [2024-11-20 08:26:46.972100] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.121 [2024-11-20 08:26:46.972121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:18361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.121 [2024-11-20 08:26:46.972129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.121 [2024-11-20 08:26:46.983660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.121 [2024-11-20 08:26:46.983681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17996 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.121 [2024-11-20 08:26:46.983690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.121 [2024-11-20 08:26:46.993712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.121 [2024-11-20 08:26:46.993733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:4640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.121 [2024-11-20 08:26:46.993741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.121 [2024-11-20 08:26:47.004444] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.004465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:10198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.004473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.012940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.012960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.012968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.024911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.024933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.024941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.035705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.035727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.035735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.045267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.045289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:11064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 
08:26:47.045297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.054332] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.054357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6285 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.054365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.063944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.063965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.063973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.072409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.072431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:6366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.072439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.085019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.085041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:5935 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.085049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.095948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.095969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:15039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.095977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.109494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.109514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.109522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.117538] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.117559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15572 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.117568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.128745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.128766] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.128774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.122 [2024-11-20 08:26:47.140821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.122 [2024-11-20 08:26:47.140842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:16150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.122 [2024-11-20 08:26:47.140850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.382 [2024-11-20 08:26:47.150701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.382 [2024-11-20 08:26:47.150722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.382 [2024-11-20 08:26:47.150730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.382 [2024-11-20 08:26:47.159312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.382 [2024-11-20 08:26:47.159333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:7747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.382 [2024-11-20 08:26:47.159341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.382 [2024-11-20 08:26:47.170582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1f84d70) 00:29:33.382 [2024-11-20 08:26:47.170602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.382 [2024-11-20 08:26:47.170611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.382 24963.00 IOPS, 97.51 MiB/s [2024-11-20T07:26:47.410Z] [2024-11-20 08:26:47.180884] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.382 [2024-11-20 08:26:47.180905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:4679 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.382 [2024-11-20 08:26:47.180913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.382 [2024-11-20 08:26:47.189531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.382 [2024-11-20 08:26:47.189552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:12654 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.382 [2024-11-20 08:26:47.189560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.382 [2024-11-20 08:26:47.200307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.382 [2024-11-20 08:26:47.200327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.382 [2024-11-20 08:26:47.200336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.382 
[2024-11-20 08:26:47.208625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.382 [2024-11-20 08:26:47.208646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.382 [2024-11-20 08:26:47.208654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.220191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.220216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:2099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.220225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.233013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.233038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.233047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.241359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.241380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.241388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.253161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.253182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.253191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.265771] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.265795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:12783 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.265804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.275512] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.275533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:5542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.275541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.283899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.283920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:4551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.283928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.294216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.294238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.294246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.303632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.303653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.303662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.312934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.312955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:1785 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.312964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.323844] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.323865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:5739 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:29:33.383 [2024-11-20 08:26:47.323873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.332629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.332650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.332658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.344339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.344360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:4884 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.344368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.356542] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.356563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:16076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.356571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.368316] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.368339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 
nsid:1 lba:803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.368347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.377711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.377733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.377741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.389688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.389710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:21653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.389718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.383 [2024-11-20 08:26:47.402106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.383 [2024-11-20 08:26:47.402128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19132 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.383 [2024-11-20 08:26:47.402136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.643 [2024-11-20 08:26:47.414146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.643 [2024-11-20 08:26:47.414168] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:1385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.643 [2024-11-20 08:26:47.414179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.643 [2024-11-20 08:26:47.425297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.643 [2024-11-20 08:26:47.425317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.643 [2024-11-20 08:26:47.425325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.643 [2024-11-20 08:26:47.433533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.643 [2024-11-20 08:26:47.433554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6675 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.643 [2024-11-20 08:26:47.433562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.643 [2024-11-20 08:26:47.445669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.643 [2024-11-20 08:26:47.445688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15131 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.643 [2024-11-20 08:26:47.445696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.643 [2024-11-20 08:26:47.453855] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 
00:29:33.643 [2024-11-20 08:26:47.453875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:23272 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.643 [2024-11-20 08:26:47.453884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.643 [2024-11-20 08:26:47.464376] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.643 [2024-11-20 08:26:47.464396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:10116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.644 [2024-11-20 08:26:47.464404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.644 [2024-11-20 08:26:47.475041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.644 [2024-11-20 08:26:47.475061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:24722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.644 [2024-11-20 08:26:47.475069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.644 [2024-11-20 08:26:47.483570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.644 [2024-11-20 08:26:47.483590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16617 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.644 [2024-11-20 08:26:47.483599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.644 [2024-11-20 08:26:47.494198] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.644 [2024-11-20 08:26:47.494223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:20348 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.644 [2024-11-20 08:26:47.494232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.644 [2024-11-20 08:26:47.505413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.644 [2024-11-20 08:26:47.505439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.644 [2024-11-20 08:26:47.505448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.644 [2024-11-20 08:26:47.513833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.644 [2024-11-20 08:26:47.513854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19013 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.644 [2024-11-20 08:26:47.513862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:29:33.644 [2024-11-20 08:26:47.525984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70) 00:29:33.644 [2024-11-20 08:26:47.526005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:5075 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.644 [2024-11-20 08:26:47.526013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:29:33.644 [2024-11-20 08:26:47.537056] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1f84d70)
00:29:33.644 [2024-11-20 08:26:47.537077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:16177 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:33.644 [2024-11-20 08:26:47.537085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... many further near-identical entries elided: the same three-line pattern ("data digest error on tqpair=(0x1f84d70)" / READ command print / COMMAND TRANSIENT TRANSPORT ERROR (00/22)) repeats from 08:26:47.546 through 08:26:48.178, with only the cid and lba values varying ...]
00:29:34.166 24749.50 IOPS, 96.68 MiB/s
00:29:34.166 Latency(us)
00:29:34.166 [2024-11-20T07:26:48.194Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:34.166 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:29:34.166 nvme0n1 : 2.00 24772.97 96.77 0.00 0.00 5162.12 2387.38 18724.57
00:29:34.166 [2024-11-20T07:26:48.194Z] ===================================================================================================================
00:29:34.166 [2024-11-20T07:26:48.194Z] Total : 24772.97 96.77 0.00 0.00 5162.12 2387.38 18724.57
00:29:34.426 {
00:29:34.426 "results": [
00:29:34.426 {
00:29:34.426 "job": "nvme0n1",
00:29:34.426 "core_mask": "0x2",
00:29:34.426 "workload": "randread",
00:29:34.426 "status": "finished",
00:29:34.426 "queue_depth": 128,
00:29:34.426 "io_size": 4096,
00:29:34.426 "runtime": 2.00412,
00:29:34.426 "iops": 24772.967686565673,
00:29:34.426 "mibps": 96.76940502564716,
00:29:34.426 "io_failed": 0,
00:29:34.426 "io_timeout": 0,
00:29:34.426 "avg_latency_us": 5162.118311311633,
00:29:34.426 "min_latency_us": 2387.382857142857,
00:29:34.426 "max_latency_us": 18724.571428571428
00:29:34.426 }
00:29:34.426 ],
00:29:34.426 "core_count": 1
00:29:34.426 }
00:29:34.426 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
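As a sanity check on the bdevperf summary above, the reported MiB/s follows directly from the reported IOPS and the 4096-byte I/O size. A minimal sketch, using the values copied from the JSON result in the log (not recomputed from raw I/O counts):

```python
# Cross-check the bdevperf summary line: MiB/s = IOPS * io_size / 2^20.
# Both input values are taken verbatim from the JSON result above.
iops = 24772.967686565673   # "iops" field
io_size = 4096              # "io_size" field, bytes per randread I/O

mibps = iops * io_size / (1024 * 1024)
print(round(mibps, 2))  # -> 96.77, matching the "mibps" field (96.769...)
```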
host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:34.426 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:34.426 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:34.426 | .driver_specific 00:29:34.426 | .nvme_error 00:29:34.426 | .status_code 00:29:34.426 | .command_transient_transport_error' 00:29:34.426 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:34.426 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 194 > 0 )) 00:29:34.426 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1847980 00:29:34.426 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1847980 ']' 00:29:34.426 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1847980 00:29:34.426 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:34.426 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.426 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1847980 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1847980' 00:29:34.686 killing process with pid 1847980 00:29:34.686 
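The `get_transient_errcount` helper above pipes `bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and then asserts the count is positive. A hedged Python equivalent of that extraction; the sample payload below is illustrative and carries only the fields the filter touches (real RPC output has many more), with the count set to the 194 seen in the trace:

```python
import json

# Illustrative bdev_get_iostat payload (trimmed); a real response from
# rpc.py bdev_get_iostat -b nvme0n1 contains many additional fields.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 194
          }
        }
      }
    }
  ]
}
""")

# Equivalent of the jq path:
#   .bdevs[0] | .driver_specific | .nvme_error
#             | .status_code | .command_transient_transport_error
count = (sample["bdevs"][0]["driver_specific"]["nvme_error"]
               ["status_code"]["command_transient_transport_error"])
print(count)  # digest.sh@71 then checks (( count > 0 ))
```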
08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1847980 00:29:34.686 Received shutdown signal, test time was about 2.000000 seconds 00:29:34.686 00:29:34.686 Latency(us) 00:29:34.686 [2024-11-20T07:26:48.714Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.686 [2024-11-20T07:26:48.714Z] =================================================================================================================== 00:29:34.686 [2024-11-20T07:26:48.714Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1847980 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1848455 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1848455 /var/tmp/bperf.sock 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1848455 ']' 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:34.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:34.686 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:34.686 [2024-11-20 08:26:48.644730] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:29:34.686 [2024-11-20 08:26:48.644777] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848455 ] 00:29:34.686 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:34.686 Zero copy mechanism will not be used. 
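The `get_transient_errcount` check traced above pipes `bdev_get_iostat` output through jq (`.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error`) and asserts the count is positive. The same field walk can be sketched in Python; the sample JSON below is a hypothetical, trimmed-down shape mirroring the iostat structure implied by that jq filter, with the count 194 taken from the `(( 194 > 0 ))` trace in this log purely for illustration.

```python
import json

# Hypothetical sample shaped like the bdev_get_iostat RPC reply this test
# queries; only the fields the jq filter touches are included.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 194
          }
        }
      }
    }
  ]
}
""")

def get_transient_errcount(stat):
    # Same path as the jq filter in host/digest.sh:
    # .bdevs[0] | .driver_specific | .nvme_error | .status_code
    #           | .command_transient_transport_error
    return (stat["bdevs"][0]["driver_specific"]["nvme_error"]
            ["status_code"]["command_transient_transport_error"])

count = get_transient_errcount(iostat)
# The digest-error test passes only if injected digest errors were observed.
assert count > 0
print(count)
```

With corrupted crc32c digests injected via `accel_error_inject_error`, every affected READ completes with a transient transport error (00/22), so a positive count here is the signal the test is looking for.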
00:29:34.686 [2024-11-20 08:26:48.700210] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.946 [2024-11-20 08:26:48.740432] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.946 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.946 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:34.946 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:34.946 08:26:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:35.205 08:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:35.205 08:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.205 08:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:35.205 08:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.205 08:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.205 08:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:35.463 nvme0n1 00:29:35.463 08:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:29:35.464 08:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:35.464 08:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:35.464 08:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:35.464 08:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:35.464 08:26:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:35.464 I/O size of 131072 is greater than zero copy threshold (65536). 00:29:35.464 Zero copy mechanism will not be used. 00:29:35.464 Running I/O for 2 seconds... 00:29:35.464 [2024-11-20 08:26:49.436447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.464 [2024-11-20 08:26:49.436492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.464 [2024-11-20 08:26:49.436503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.464 [2024-11-20 08:26:49.441755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.464 [2024-11-20 08:26:49.441782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.464 [2024-11-20 08:26:49.441792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.464 
[2024-11-20 08:26:49.447099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.464 [2024-11-20 08:26:49.447123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.464 [2024-11-20 08:26:49.447132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.464 [2024-11-20 08:26:49.452347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.464 [2024-11-20 08:26:49.452371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.464 [2024-11-20 08:26:49.452380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.464 [2024-11-20 08:26:49.457584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.464 [2024-11-20 08:26:49.457607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.464 [2024-11-20 08:26:49.457615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.464 [2024-11-20 08:26:49.462813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.464 [2024-11-20 08:26:49.462835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.464 [2024-11-20 08:26:49.462844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.464 [2024-11-20 08:26:49.467997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.464 [2024-11-20 08:26:49.468020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.464 [2024-11-20 08:26:49.468028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.464 [2024-11-20 08:26:49.473277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.464 [2024-11-20 08:26:49.473298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.464 [2024-11-20 08:26:49.473307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.464 [2024-11-20 08:26:49.478607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.464 [2024-11-20 08:26:49.478630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.464 [2024-11-20 08:26:49.478638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.464 [2024-11-20 08:26:49.483826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.464 [2024-11-20 08:26:49.483850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.464 [2024-11-20 08:26:49.483858] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.489001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.489024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.489037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.491840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.491863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.491871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.497022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.497046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.497054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.502244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.502266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 
08:26:49.502273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.507540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.507562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.507570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.512773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.512795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.512803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.517892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.517914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.517923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.523128] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.523150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2304 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.523157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.528372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.528394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.528402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.533479] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.533505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.533513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.538623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.538645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.538654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.543876] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.543899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.543907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.549041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.549064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.549072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.554252] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.554276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.554284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.558905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.558928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.558937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.564065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.564088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.564097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.569230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.569251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.569259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.574411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.574433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.574441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.579073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.725 [2024-11-20 08:26:49.579095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.725 [2024-11-20 08:26:49.579103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.725 [2024-11-20 08:26:49.583738] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.583760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.583768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.588766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.588788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.588796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.593796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.593818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.593827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.598785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.598806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.598814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.604021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.604043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.604051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.608583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.608605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.608613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.611655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.611677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.611685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.616622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.616649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.616658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.621716] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.621736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.621744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.626516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.626538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.626546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.631416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.631437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.631445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.636233] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.636254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.636262] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.640978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.641000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.641008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.646341] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.646363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.646371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.651719] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.651742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.651750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.657648] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.657672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.657681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.663220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.663242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.663250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.668798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.668819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.668827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.674310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.674332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:35.726 [2024-11-20 08:26:49.674340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:35.726 [2024-11-20 08:26:49.679623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:35.726 [2024-11-20 08:26:49.679644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:14 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.726 [2024-11-20 08:26:49.679652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:29:35.726 [2024-11-20 08:26:49.685021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30)
00:29:35.726 [2024-11-20 08:26:49.685043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:35.726 [2024-11-20 08:26:49.685051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
[... repeated pairs of the same two log records elided: nvme_tcp.c:1365 "data digest error on tqpair=(0x84fa30)" followed by a READ command print and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion on qid:1, timestamps 2024-11-20 08:26:49.690 through 08:26:50.153 ...]
00:29:36.252 [2024-11-20 08:26:50.160655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30)
00:29:36.252 [2024-11-20 08:26:50.160678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:36.252 [2024-11-20 08:26:50.160686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022
p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.167616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.167650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.167659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.173053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.173076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.173085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.179266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.179289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.179298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.185677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.185701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.185710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.193658] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.193681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.193689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.200691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.200718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.200727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.207027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.207051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.207060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.212992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.213016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.213024] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.218936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.218959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.218968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.224369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.224393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.224401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.230102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.230125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.230133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.235657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.235679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:36.252 [2024-11-20 08:26:50.235687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.241193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.241221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.241230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.246854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.246876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.246884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.252395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.252417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.252 [2024-11-20 08:26:50.252426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.252 [2024-11-20 08:26:50.255344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.252 [2024-11-20 08:26:50.255365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.253 [2024-11-20 08:26:50.255373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.253 [2024-11-20 08:26:50.260624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.253 [2024-11-20 08:26:50.260646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.253 [2024-11-20 08:26:50.260655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.253 [2024-11-20 08:26:50.266109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.253 [2024-11-20 08:26:50.266130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.253 [2024-11-20 08:26:50.266138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.253 [2024-11-20 08:26:50.271999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.253 [2024-11-20 08:26:50.272021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.253 [2024-11-20 08:26:50.272030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.513 [2024-11-20 08:26:50.277709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.513 [2024-11-20 08:26:50.277733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.513 [2024-11-20 08:26:50.277742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.513 [2024-11-20 08:26:50.283239] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.513 [2024-11-20 08:26:50.283261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.513 [2024-11-20 08:26:50.283270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.513 [2024-11-20 08:26:50.288736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.513 [2024-11-20 08:26:50.288758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.513 [2024-11-20 08:26:50.288767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.513 [2024-11-20 08:26:50.294184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.513 [2024-11-20 08:26:50.294212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.513 [2024-11-20 08:26:50.294226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.513 [2024-11-20 08:26:50.299852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 
00:29:36.513 [2024-11-20 08:26:50.299875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.513 [2024-11-20 08:26:50.299883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.513 [2024-11-20 08:26:50.305543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.513 [2024-11-20 08:26:50.305565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.513 [2024-11-20 08:26:50.305574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.513 [2024-11-20 08:26:50.311969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.513 [2024-11-20 08:26:50.311992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.513 [2024-11-20 08:26:50.312001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.513 [2024-11-20 08:26:50.320333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.513 [2024-11-20 08:26:50.320356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.513 [2024-11-20 08:26:50.320365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.513 [2024-11-20 08:26:50.328342] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.513 [2024-11-20 08:26:50.328365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.513 [2024-11-20 08:26:50.328374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.335038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.335063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.335072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.341881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.341905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.341914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.350791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.350815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.350824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.358217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.358244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.358252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.364735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.364758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.364766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.370304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.370327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.370336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.375824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.375847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.375856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.381270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.381292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.381301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.386805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.386827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.386835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.392212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.392234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.392242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.397504] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.397526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.397534] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.402891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.402913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.402921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.408355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.408377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.408386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.413893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.413915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.413924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.419357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.419380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:36.514 [2024-11-20 08:26:50.419390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.424928] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.424951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.424959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.430358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.430379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.430388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.514 5292.00 IOPS, 661.50 MiB/s [2024-11-20T07:26:50.542Z] [2024-11-20 08:26:50.437031] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.437054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.437062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.442463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.442485] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.442494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.448198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.448225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.448250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.453741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.453764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.453775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.459118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.459140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.459149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.464597] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 
00:29:36.514 [2024-11-20 08:26:50.464620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.464628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.470048] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.470070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.470079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.475428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.475460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.475468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.480824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.480847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.514 [2024-11-20 08:26:50.480855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.514 [2024-11-20 08:26:50.486539] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.514 [2024-11-20 08:26:50.486561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.515 [2024-11-20 08:26:50.486569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.515 [2024-11-20 08:26:50.492060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.515 [2024-11-20 08:26:50.492082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.515 [2024-11-20 08:26:50.492089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.515 [2024-11-20 08:26:50.497608] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.515 [2024-11-20 08:26:50.497630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.515 [2024-11-20 08:26:50.497638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.515 [2024-11-20 08:26:50.503132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.515 [2024-11-20 08:26:50.503154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.515 [2024-11-20 08:26:50.503162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 
m:0 dnr:0 00:29:36.515 [2024-11-20 08:26:50.508693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.515 [2024-11-20 08:26:50.508714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.515 [2024-11-20 08:26:50.508722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.515 [2024-11-20 08:26:50.514092] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.515 [2024-11-20 08:26:50.514114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.515 [2024-11-20 08:26:50.514122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.515 [2024-11-20 08:26:50.519491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.515 [2024-11-20 08:26:50.519512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.515 [2024-11-20 08:26:50.519520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.515 [2024-11-20 08:26:50.525022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.515 [2024-11-20 08:26:50.525045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.515 [2024-11-20 08:26:50.525053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.515 [2024-11-20 08:26:50.530405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.515 [2024-11-20 08:26:50.530427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.515 [2024-11-20 08:26:50.530435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.515 [2024-11-20 08:26:50.535878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.515 [2024-11-20 08:26:50.535900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.515 [2024-11-20 08:26:50.535908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.541256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.541279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.541287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.546958] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.546980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.546992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.552099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.552122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.552130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.557524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.557548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.557556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.563292] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.563314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.563323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.568937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.568959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:36.775 [2024-11-20 08:26:50.568968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.574416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.574440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.574449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.579714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.579737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.579745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.584866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.584889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.584897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.589993] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.590015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 
lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.590023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.595235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.595262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.595270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.600431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.600454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.600463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.605668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.605691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.605699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.610961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.610984] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.610993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.616241] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.616263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.616271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.622245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.622267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.622275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.627802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.627831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.627839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.633096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 
00:29:36.775 [2024-11-20 08:26:50.633119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.633127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.638309] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.638331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.638340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.643558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.643581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.643589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.649140] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.649164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.649172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.654571] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.775 [2024-11-20 08:26:50.654593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.775 [2024-11-20 08:26:50.654601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.775 [2024-11-20 08:26:50.660019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.660042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.660050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.665346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.665368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.665376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.670741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.670763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.670772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.676074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.676096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.676104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.681528] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.681551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.681559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.686980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.687003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.687014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.692426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.692461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.692470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.698012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.698035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.698044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.703428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.703450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.703458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.708852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.708875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.708885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.714254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.714276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.714284] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.719437] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.719460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.719469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.724665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.724690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.724698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.729769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.729792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.729800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.734864] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.734890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:36.776 [2024-11-20 08:26:50.734898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.740041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.740064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.740071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.745110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.745132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.745141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.750304] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.750327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.750335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.755480] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.755502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 
lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.755510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.760634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.760655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.760663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.765690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.765712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.765720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.770862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.770886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.770894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.776049] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.776072] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.776080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.781365] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.781387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.781394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.787094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.787117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.787126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:36.776 [2024-11-20 08:26:50.793826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:36.776 [2024-11-20 08:26:50.793850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:36.776 [2024-11-20 08:26:50.793858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.036 [2024-11-20 08:26:50.801387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 
00:29:37.036 [2024-11-20 08:26:50.801412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.036 [2024-11-20 08:26:50.801421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.036 [2024-11-20 08:26:50.808922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.036 [2024-11-20 08:26:50.808947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.036 [2024-11-20 08:26:50.808956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.036 [2024-11-20 08:26:50.816531] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.036 [2024-11-20 08:26:50.816555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.036 [2024-11-20 08:26:50.816564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.036 [2024-11-20 08:26:50.824223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.036 [2024-11-20 08:26:50.824248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.036 [2024-11-20 08:26:50.824256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.036 [2024-11-20 08:26:50.832396] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30)
00:29:37.037 [2024-11-20 08:26:50.832421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:37.037 [2024-11-20 08:26:50.832430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
[... the same data digest error / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for subsequent READ commands (qid:1, varying cid and lba) on tqpair=(0x84fa30) from 08:26:50.839863 through 08:26:51.303965 ...]
00:29:37.300 [2024-11-20 08:26:51.310558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30)
00:29:37.300 [2024-11-20 08:26:51.310580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1
lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.300 [2024-11-20 08:26:51.310589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.300 [2024-11-20 08:26:51.318501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.300 [2024-11-20 08:26:51.318523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.300 [2024-11-20 08:26:51.318531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.559 [2024-11-20 08:26:51.326736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.559 [2024-11-20 08:26:51.326757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.559 [2024-11-20 08:26:51.326766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.559 [2024-11-20 08:26:51.333564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.559 [2024-11-20 08:26:51.333585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.559 [2024-11-20 08:26:51.333593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.559 [2024-11-20 08:26:51.340924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.559 [2024-11-20 08:26:51.340946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.559 [2024-11-20 08:26:51.340955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.559 [2024-11-20 08:26:51.348934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.559 [2024-11-20 08:26:51.348956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.559 [2024-11-20 08:26:51.348964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.559 [2024-11-20 08:26:51.356179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.559 [2024-11-20 08:26:51.356207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.559 [2024-11-20 08:26:51.356216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.559 [2024-11-20 08:26:51.363751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.559 [2024-11-20 08:26:51.363774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.559 [2024-11-20 08:26:51.363783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.560 [2024-11-20 08:26:51.370994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 
00:29:37.560 [2024-11-20 08:26:51.371017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.560 [2024-11-20 08:26:51.371025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.560 [2024-11-20 08:26:51.378402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.560 [2024-11-20 08:26:51.378425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.560 [2024-11-20 08:26:51.378434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.560 [2024-11-20 08:26:51.385854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.560 [2024-11-20 08:26:51.385876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.560 [2024-11-20 08:26:51.385885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.560 [2024-11-20 08:26:51.392752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.560 [2024-11-20 08:26:51.392773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.560 [2024-11-20 08:26:51.392781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:37.560 [2024-11-20 08:26:51.399854] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.560 [2024-11-20 08:26:51.399876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.560 [2024-11-20 08:26:51.399884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.560 [2024-11-20 08:26:51.407995] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.560 [2024-11-20 08:26:51.408016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.560 [2024-11-20 08:26:51.408025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.560 [2024-11-20 08:26:51.414830] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.560 [2024-11-20 08:26:51.414853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.560 [2024-11-20 08:26:51.414861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:37.560 [2024-11-20 08:26:51.422641] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.560 [2024-11-20 08:26:51.422661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.560 [2024-11-20 08:26:51.422674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 
m:0 dnr:0 00:29:37.560 [2024-11-20 08:26:51.430314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.560 [2024-11-20 08:26:51.430336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.560 [2024-11-20 08:26:51.430345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:37.560 5262.50 IOPS, 657.81 MiB/s [2024-11-20T07:26:51.588Z] [2024-11-20 08:26:51.479516] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x84fa30) 00:29:37.560 [2024-11-20 08:26:51.479537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:37.560 [2024-11-20 08:26:51.479546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:37.560 00:29:37.560 Latency(us) 00:29:37.560 [2024-11-20T07:26:51.588Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.560 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:29:37.560 nvme0n1 : 2.08 5060.22 632.53 0.00 0.00 3043.34 608.55 85883.37 00:29:37.560 [2024-11-20T07:26:51.588Z] =================================================================================================================== 00:29:37.560 [2024-11-20T07:26:51.588Z] Total : 5060.22 632.53 0.00 0.00 3043.34 608.55 85883.37 00:29:37.560 { 00:29:37.560 "results": [ 00:29:37.560 { 00:29:37.560 "job": "nvme0n1", 00:29:37.560 "core_mask": "0x2", 00:29:37.560 "workload": "randread", 00:29:37.560 "status": "finished", 00:29:37.560 "queue_depth": 16, 00:29:37.560 "io_size": 131072, 00:29:37.560 "runtime": 2.083113, 00:29:37.560 "iops": 5060.215168356205, 00:29:37.560 
"mibps": 632.5268960445256, 00:29:37.560 "io_failed": 0, 00:29:37.560 "io_timeout": 0, 00:29:37.560 "avg_latency_us": 3043.336641233099, 00:29:37.560 "min_latency_us": 608.5485714285714, 00:29:37.560 "max_latency_us": 85883.36761904763 00:29:37.560 } 00:29:37.560 ], 00:29:37.560 "core_count": 1 00:29:37.560 } 00:29:37.560 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:37.560 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:37.560 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:37.560 | .driver_specific 00:29:37.560 | .nvme_error 00:29:37.560 | .status_code 00:29:37.560 | .command_transient_transport_error' 00:29:37.560 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:37.819 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 341 > 0 )) 00:29:37.819 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1848455 00:29:37.819 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1848455 ']' 00:29:37.819 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1848455 00:29:37.819 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:37.819 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:37.819 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1848455 00:29:37.819 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:37.819 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:37.819 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1848455' 00:29:37.819 killing process with pid 1848455 00:29:37.819 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1848455 00:29:37.819 Received shutdown signal, test time was about 2.000000 seconds 00:29:37.819 00:29:37.819 Latency(us) 00:29:37.819 [2024-11-20T07:26:51.847Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:37.819 [2024-11-20T07:26:51.847Z] =================================================================================================================== 00:29:37.819 [2024-11-20T07:26:51.847Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:37.819 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1848455 00:29:38.078 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:29:38.078 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:29:38.078 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:29:38.078 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:29:38.078 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:29:38.078 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1848928 00:29:38.078 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 
128 -z 00:29:38.078 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1848928 /var/tmp/bperf.sock 00:29:38.078 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1848928 ']' 00:29:38.078 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:29:38.078 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:38.079 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:29:38.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:29:38.079 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:38.079 08:26:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:38.079 [2024-11-20 08:26:51.990400] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:29:38.079 [2024-11-20 08:26:51.990446] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1848928 ] 00:29:38.079 [2024-11-20 08:26:52.065333] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:38.338 [2024-11-20 08:26:52.107429] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:38.338 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:38.338 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:29:38.338 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:38.338 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:29:38.597 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:29:38.597 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.597 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:38.597 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.597 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:38.597 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:29:38.856 nvme0n1 00:29:38.856 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:29:38.856 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:38.856 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:38.856 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:38.856 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:29:38.856 08:26:52 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:29:39.115 Running I/O for 2 seconds... 
00:29:39.115 [2024-11-20 08:26:52.941042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ed4e8 00:29:39.115 [2024-11-20 08:26:52.941883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:15898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.115 [2024-11-20 08:26:52.941912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:29:39.115 [2024-11-20 08:26:52.950153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ed920 00:29:39.115 [2024-11-20 08:26:52.951065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:20385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.115 [2024-11-20 08:26:52.951088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:39.115 [2024-11-20 08:26:52.960144] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f2948 00:29:39.115 [2024-11-20 08:26:52.961184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.115 [2024-11-20 08:26:52.961209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:39.115 [2024-11-20 08:26:52.969421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e88f8 00:29:39.115 [2024-11-20 08:26:52.970568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.115 [2024-11-20 08:26:52.970588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:39.115 [2024-11-20 08:26:52.976710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eb328 00:29:39.115 [2024-11-20 08:26:52.977427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:20635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.115 [2024-11-20 08:26:52.977447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:52.985807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f5378 00:29:39.116 [2024-11-20 08:26:52.986532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:7569 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:52.986555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:52.994685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f6458 00:29:39.116 [2024-11-20 08:26:52.995405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:1742 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:52.995425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.003896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f46d0 00:29:39.116 [2024-11-20 08:26:53.004396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:9379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.004417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.013231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f4298 00:29:39.116 [2024-11-20 08:26:53.013846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:21673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.013865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.023361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e6738 00:29:39.116 [2024-11-20 08:26:53.024633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17628 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.024653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.031694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ec408 00:29:39.116 [2024-11-20 08:26:53.032860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:16903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.032888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.040903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fac10 00:29:39.116 [2024-11-20 08:26:53.042075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.042094] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.048185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e49b0 00:29:39.116 [2024-11-20 08:26:53.048924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:23543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.048943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.057140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f0350 00:29:39.116 [2024-11-20 08:26:53.057858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:24752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.057877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.066056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ef270 00:29:39.116 [2024-11-20 08:26:53.066790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13868 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.066809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.075011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ee190 00:29:39.116 [2024-11-20 08:26:53.075732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:22282 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:39.116 [2024-11-20 08:26:53.075751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.083922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eaef0 00:29:39.116 [2024-11-20 08:26:53.084636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.084655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.092828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ddc00 00:29:39.116 [2024-11-20 08:26:53.093539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:11500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.093558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.101753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f35f0 00:29:39.116 [2024-11-20 08:26:53.102475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:24825 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.102495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.110671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f2510 00:29:39.116 [2024-11-20 08:26:53.111387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 
nsid:1 lba:17081 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.111407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.119595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f1430 00:29:39.116 [2024-11-20 08:26:53.120314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:5397 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.120333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.128507] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eb760 00:29:39.116 [2024-11-20 08:26:53.129214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:1056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.129234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.116 [2024-11-20 08:26:53.137411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e7818 00:29:39.116 [2024-11-20 08:26:53.138145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:4185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.116 [2024-11-20 08:26:53.138164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:39.375 [2024-11-20 08:26:53.146659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e5a90 00:29:39.375 [2024-11-20 08:26:53.147143] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:24843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.375 [2024-11-20 08:26:53.147163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:39.375 [2024-11-20 08:26:53.158069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e5220 00:29:39.375 [2024-11-20 08:26:53.159584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17978 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.375 [2024-11-20 08:26:53.159605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:39.375 [2024-11-20 08:26:53.164344] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166df988 00:29:39.375 [2024-11-20 08:26:53.164959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.375 [2024-11-20 08:26:53.164978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:39.375 [2024-11-20 08:26:53.173913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f0bc0 00:29:39.375 [2024-11-20 08:26:53.174877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:14441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.375 [2024-11-20 08:26:53.174896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:39.375 [2024-11-20 08:26:53.183292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f7da8 
00:29:39.376 [2024-11-20 08:26:53.184273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:2261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.184293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.192989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f3e60 00:29:39.376 [2024-11-20 08:26:53.193994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20814 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.194015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.203399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166df550 00:29:39.376 [2024-11-20 08:26:53.204982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:23327 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.205002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.209978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e1710 00:29:39.376 [2024-11-20 08:26:53.210832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:23923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.210851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.219301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2439640) with pdu=0x2000166f7970 00:29:39.376 [2024-11-20 08:26:53.220298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:2719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.220321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.228429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e95a0 00:29:39.376 [2024-11-20 08:26:53.228957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:8181 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.228977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.236823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e95a0 00:29:39.376 [2024-11-20 08:26:53.237290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:9080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.237311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.248217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f1430 00:29:39.376 [2024-11-20 08:26:53.249664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.249683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.254503] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e5ec8 00:29:39.376 [2024-11-20 08:26:53.255041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5796 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.255061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.264086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ef6a8 00:29:39.376 [2024-11-20 08:26:53.264948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.264972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.273400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f96f8 00:29:39.376 [2024-11-20 08:26:53.274370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6120 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.274389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.282732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e73e0 00:29:39.376 [2024-11-20 08:26:53.283823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2196 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.283842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 
00:29:39.376 [2024-11-20 08:26:53.292062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f1ca0 00:29:39.376 [2024-11-20 08:26:53.293278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.293297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.300265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e6b70 00:29:39.376 [2024-11-20 08:26:53.301481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.301502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.307899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f96f8 00:29:39.376 [2024-11-20 08:26:53.308538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.308558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.316939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e38d0 00:29:39.376 [2024-11-20 08:26:53.317584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.317603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.326285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ec408 00:29:39.376 [2024-11-20 08:26:53.326940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:21665 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.326959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.335041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e8088 00:29:39.376 [2024-11-20 08:26:53.335670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:7048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.335689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.345291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f8a50 00:29:39.376 [2024-11-20 08:26:53.346034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:7094 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.346053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.353587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e88f8 00:29:39.376 [2024-11-20 08:26:53.354302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.354322] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.364000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f7970 00:29:39.376 [2024-11-20 08:26:53.365378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.365397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.370511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fe2e8 00:29:39.376 [2024-11-20 08:26:53.371143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:19098 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.371161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.380055] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fe720 00:29:39.376 [2024-11-20 08:26:53.380847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 08:26:53.380867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:29:39.376 [2024-11-20 08:26:53.391293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f6020 00:29:39.376 [2024-11-20 08:26:53.392558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:17603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.376 [2024-11-20 
08:26:53.392578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:39.636 [2024-11-20 08:26:53.400596] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e4140 00:29:39.636 [2024-11-20 08:26:53.401887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:10826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.636 [2024-11-20 08:26:53.401907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:39.636 [2024-11-20 08:26:53.408128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f46d0 00:29:39.636 [2024-11-20 08:26:53.408601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:6976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.636 [2024-11-20 08:26:53.408620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:29:39.636 [2024-11-20 08:26:53.418265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fe2e8 00:29:39.636 [2024-11-20 08:26:53.419387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.636 [2024-11-20 08:26:53.419407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:29:39.636 [2024-11-20 08:26:53.426854] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e4140 00:29:39.636 [2024-11-20 08:26:53.427949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24669 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:29:39.636 [2024-11-20 08:26:53.427970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.435933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eaab8 00:29:39.637 [2024-11-20 08:26:53.437048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:20920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.437067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.445120] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e6b70 00:29:39.637 [2024-11-20 08:26:53.446247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:60 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.446266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.453832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166edd58 00:29:39.637 [2024-11-20 08:26:53.454882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:21854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.454907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.462364] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e73e0 00:29:39.637 [2024-11-20 08:26:53.463374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:95 nsid:1 lba:7234 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.463393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.473284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fac10 00:29:39.637 [2024-11-20 08:26:53.474751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:2586 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.474770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.479664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f3a28 00:29:39.637 [2024-11-20 08:26:53.480455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:9604 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.480475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.490769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f6458 00:29:39.637 [2024-11-20 08:26:53.491928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:12375 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.491948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.498500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fda78 00:29:39.637 [2024-11-20 08:26:53.498980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:1598 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.499000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.509047] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e6fa8 00:29:39.637 [2024-11-20 08:26:53.510225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:3449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.510245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.516211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f3e60 00:29:39.637 [2024-11-20 08:26:53.516723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:21050 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.516742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.525303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f20d8 00:29:39.637 [2024-11-20 08:26:53.526096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16338 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.526116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.534116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e95a0 
00:29:39.637 [2024-11-20 08:26:53.534597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:11851 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.534617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.544245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f7970 00:29:39.637 [2024-11-20 08:26:53.545502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1422 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.545521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.553033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166df988 00:29:39.637 [2024-11-20 08:26:53.554086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.554106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.560743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166df550 00:29:39.637 [2024-11-20 08:26:53.561339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3744 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.561359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.569038] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2439640) with pdu=0x2000166fda78 00:29:39.637 [2024-11-20 08:26:53.569705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:5269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.569723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.578385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ea680 00:29:39.637 [2024-11-20 08:26:53.579084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:11709 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.579103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.587848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ea680 00:29:39.637 [2024-11-20 08:26:53.588559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3694 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.588579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.597070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fd208 00:29:39.637 [2024-11-20 08:26:53.597949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:22592 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.597970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.606554] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f1ca0 00:29:39.637 [2024-11-20 08:26:53.607680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.607699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.615893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e0ea0 00:29:39.637 [2024-11-20 08:26:53.617149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:21792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.617169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.624269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166de470 00:29:39.637 [2024-11-20 08:26:53.625157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.625177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.633305] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e95a0 00:29:39.637 [2024-11-20 08:26:53.634252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:2784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.634272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:29:39.637 [2024-11-20 08:26:53.643930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e95a0 00:29:39.637 [2024-11-20 08:26:53.645419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:20865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.645438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:39.637 [2024-11-20 08:26:53.650218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fa3a0 00:29:39.637 [2024-11-20 08:26:53.650898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:8318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.637 [2024-11-20 08:26:53.650918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:39.897 [2024-11-20 08:26:53.660390] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e5658 00:29:39.897 [2024-11-20 08:26:53.661483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:23989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.897 [2024-11-20 08:26:53.661503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:29:39.897 [2024-11-20 08:26:53.669472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e1f80 00:29:39.897 [2024-11-20 08:26:53.670172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.897 [2024-11-20 08:26:53.670191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:71 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:39.897 [2024-11-20 08:26:53.677874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fd208 00:29:39.897 [2024-11-20 08:26:53.679131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:14184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.679151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.685528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e3060 00:29:39.898 [2024-11-20 08:26:53.686218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:1102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.686242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.696348] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f7100 00:29:39.898 [2024-11-20 08:26:53.697426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:16950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.697446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.706861] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e2c28 00:29:39.898 [2024-11-20 08:26:53.708398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5459 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.708418] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.713304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166dece0 00:29:39.898 [2024-11-20 08:26:53.714098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:4636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.714118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.723918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f96f8 00:29:39.898 [2024-11-20 08:26:53.724886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:5816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.724907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.732427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fb8b8 00:29:39.898 [2024-11-20 08:26:53.733292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:22506 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.733312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.740950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ddc00 00:29:39.898 [2024-11-20 08:26:53.741688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:14579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.741708] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.750147] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fda78 00:29:39.898 [2024-11-20 08:26:53.751126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.751147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.760340] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fc560 00:29:39.898 [2024-11-20 08:26:53.761693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:12691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.761712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.769694] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e3060 00:29:39.898 [2024-11-20 08:26:53.771216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:12357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.771235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.776126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fa3a0 00:29:39.898 [2024-11-20 08:26:53.776842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:19762 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:39.898 [2024-11-20 08:26:53.776861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.785475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eb760 00:29:39.898 [2024-11-20 08:26:53.786323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:8834 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.786342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.794799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eaab8 00:29:39.898 [2024-11-20 08:26:53.795749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13414 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.795769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.805407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eaab8 00:29:39.898 [2024-11-20 08:26:53.806888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.806907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.811685] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e27f0 00:29:39.898 [2024-11-20 08:26:53.812469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 
lba:11184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.812488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.821856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e99d8 00:29:39.898 [2024-11-20 08:26:53.822591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:15613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.822611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.831034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e38d0 00:29:39.898 [2024-11-20 08:26:53.831976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.831995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.840964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e38d0 00:29:39.898 [2024-11-20 08:26:53.842440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.842459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.847260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fcdd0 00:29:39.898 [2024-11-20 08:26:53.847924] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:8380 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.847944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.855762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fc998 00:29:39.898 [2024-11-20 08:26:53.856438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25041 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.856456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.865103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e8088 00:29:39.898 [2024-11-20 08:26:53.865888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:7303 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.865907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.874439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e7818 00:29:39.898 [2024-11-20 08:26:53.875336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.875356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.883777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eb760 00:29:39.898 
[2024-11-20 08:26:53.884797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.884817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.893094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f4b08 00:29:39.898 [2024-11-20 08:26:53.894224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.894244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.902421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e8088 00:29:39.898 [2024-11-20 08:26:53.903680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:22354 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.903699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.910505] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f31b8 00:29:39.898 [2024-11-20 08:26:53.911779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.898 [2024-11-20 08:26:53.911799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:29:39.898 [2024-11-20 08:26:53.918145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2439640) with pdu=0x2000166eb760 00:29:39.899 [2024-11-20 08:26:53.918749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:12589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:39.899 [2024-11-20 08:26:53.918772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:29:40.186 28171.00 IOPS, 110.04 MiB/s [2024-11-20T07:26:54.214Z] [2024-11-20 08:26:53.929537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f5be8 00:29:40.186 [2024-11-20 08:26:53.930491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:15907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.186 [2024-11-20 08:26:53.930511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:40.186 [2024-11-20 08:26:53.937743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e8d30 00:29:40.186 [2024-11-20 08:26:53.938992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:9492 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.186 [2024-11-20 08:26:53.939012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:29:40.186 [2024-11-20 08:26:53.945379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e49b0 00:29:40.186 [2024-11-20 08:26:53.946073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:6074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.186 [2024-11-20 08:26:53.946092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:29:40.186 [2024-11-20 08:26:53.956436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fe2e8 00:29:40.186 [2024-11-20 08:26:53.957633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.186 [2024-11-20 08:26:53.957653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:40.186 [2024-11-20 08:26:53.966043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ddc00 00:29:40.186 [2024-11-20 08:26:53.967320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:14312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:53.967340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:53.975423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fc998 00:29:40.187 [2024-11-20 08:26:53.976812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:4037 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:53.976832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:53.984817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f7100 00:29:40.187 [2024-11-20 08:26:53.986322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:53.986340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:34 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:53.992628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ee190 00:29:40.187 [2024-11-20 08:26:53.993589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:53.993608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.001806] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e73e0 00:29:40.187 [2024-11-20 08:26:54.003019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:2197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.003040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.008462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ebfd0 00:29:40.187 [2024-11-20 08:26:54.009136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:18187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.009155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.017762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e8088 00:29:40.187 [2024-11-20 08:26:54.018581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.018600] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.028595] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f4b08 00:29:40.187 [2024-11-20 08:26:54.029663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:6987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.029683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.038054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166de038 00:29:40.187 [2024-11-20 08:26:54.039476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:6341 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.039495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.044491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e2c28 00:29:40.187 [2024-11-20 08:26:54.045192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:17982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.045215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.053797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e1b48 00:29:40.187 [2024-11-20 08:26:54.054642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.054661] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.064922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ee190 00:29:40.187 [2024-11-20 08:26:54.066227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:18609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.066248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.073892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e0630 00:29:40.187 [2024-11-20 08:26:54.074877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.074897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.082066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e0ea0 00:29:40.187 [2024-11-20 08:26:54.083147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:12903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.083167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.091446] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eea00 00:29:40.187 [2024-11-20 08:26:54.092646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:18438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:29:40.187 [2024-11-20 08:26:54.092665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.100784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e0a68 00:29:40.187 [2024-11-20 08:26:54.102096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19985 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.102116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.109004] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f35f0 00:29:40.187 [2024-11-20 08:26:54.110157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:24843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.110176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.118008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f9f68 00:29:40.187 [2024-11-20 08:26:54.119042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.119061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.126510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ea680 00:29:40.187 [2024-11-20 08:26:54.127481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:17908 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.127499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.135833] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e9168 00:29:40.187 [2024-11-20 08:26:54.136827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:11579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.136846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.146457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e9168 00:29:40.187 [2024-11-20 08:26:54.147908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.147927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.155296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e84c0 00:29:40.187 [2024-11-20 08:26:54.156734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:6692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.156759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.161567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fcdd0 00:29:40.187 [2024-11-20 08:26:54.162196] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.162218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.170895] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ee190 00:29:40.187 [2024-11-20 08:26:54.171656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.171675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:29:40.187 [2024-11-20 08:26:54.180332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e0a68 00:29:40.187 [2024-11-20 08:26:54.181300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.187 [2024-11-20 08:26:54.181332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:29:40.495 [2024-11-20 08:26:54.190002] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f6890 00:29:40.495 [2024-11-20 08:26:54.190905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.495 [2024-11-20 08:26:54.190927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:29:40.495 [2024-11-20 08:26:54.199236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ef270 00:29:40.495 [2024-11-20 08:26:54.200117] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.495 [2024-11-20 08:26:54.200136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:40.495 [2024-11-20 08:26:54.208284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e5ec8 00:29:40.495 [2024-11-20 08:26:54.209157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13725 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.495 [2024-11-20 08:26:54.209176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:40.495 [2024-11-20 08:26:54.216681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f57b0 00:29:40.495 [2024-11-20 08:26:54.217559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:5313 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.495 [2024-11-20 08:26:54.217579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:40.495 [2024-11-20 08:26:54.225951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166df118 00:29:40.495 [2024-11-20 08:26:54.226468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:22590 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.495 [2024-11-20 08:26:54.226489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:40.495 [2024-11-20 08:26:54.236495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with 
pdu=0x2000166f7970 00:29:40.495 [2024-11-20 08:26:54.237793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.237812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.244911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f1ca0 00:29:40.496 [2024-11-20 08:26:54.245871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:21987 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.245890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.253951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ebb98 00:29:40.496 [2024-11-20 08:26:54.254683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.254702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.262422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e9168 00:29:40.496 [2024-11-20 08:26:54.263728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:15279 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.263747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.270660] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eb328 00:29:40.496 [2024-11-20 08:26:54.271393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:8835 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.271412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.279879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ecc78 00:29:40.496 [2024-11-20 08:26:54.280753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14864 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.280773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.288814] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ea680 00:29:40.496 [2024-11-20 08:26:54.289737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.289756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.297889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ed920 00:29:40.496 [2024-11-20 08:26:54.298819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.298838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 
08:26:54.307052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f96f8 00:29:40.496 [2024-11-20 08:26:54.307974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:22599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.307993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.316123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eea00 00:29:40.496 [2024-11-20 08:26:54.317058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:9062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.317078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.325086] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eb328 00:29:40.496 [2024-11-20 08:26:54.326070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.326089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.334290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166efae0 00:29:40.496 [2024-11-20 08:26:54.335237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18205 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.335255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 
sqhd:006d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.343251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ef6a8 00:29:40.496 [2024-11-20 08:26:54.344176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.344195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.352194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f4b08 00:29:40.496 [2024-11-20 08:26:54.353133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:10705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.353152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.361127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166de038 00:29:40.496 [2024-11-20 08:26:54.362061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:11646 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.362080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.370126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fd640 00:29:40.496 [2024-11-20 08:26:54.371054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:20090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.371075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.379078] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e0630 00:29:40.496 [2024-11-20 08:26:54.380008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:1421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.380028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.389257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f1430 00:29:40.496 [2024-11-20 08:26:54.390571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:20981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.390593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.398584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f6020 00:29:40.496 [2024-11-20 08:26:54.400011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:11635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.400031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.406299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ed920 00:29:40.496 [2024-11-20 08:26:54.407280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:23039 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.407299] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.415462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eee38 00:29:40.496 [2024-11-20 08:26:54.416606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:16774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.416626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.422745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eb328 00:29:40.496 [2024-11-20 08:26:54.423443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:10014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.423462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.431686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eea00 00:29:40.496 [2024-11-20 08:26:54.432411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13306 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.432430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.440968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ed920 00:29:40.496 [2024-11-20 08:26:54.441913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:15764 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 
[2024-11-20 08:26:54.441931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.449996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e88f8 00:29:40.496 [2024-11-20 08:26:54.450504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:14681 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.450523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.460277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fe720 00:29:40.496 [2024-11-20 08:26:54.461592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:16716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.461610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.468805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e38d0 00:29:40.496 [2024-11-20 08:26:54.469800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:16366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.496 [2024-11-20 08:26:54.469820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:40.496 [2024-11-20 08:26:54.477981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e1f80 00:29:40.496 [2024-11-20 08:26:54.478716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17615 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.497 [2024-11-20 08:26:54.478735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.497 [2024-11-20 08:26:54.487101] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f3e60 00:29:40.497 [2024-11-20 08:26:54.488157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:7736 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.497 [2024-11-20 08:26:54.488176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.497 [2024-11-20 08:26:54.496072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fc560 00:29:40.497 [2024-11-20 08:26:54.497183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:23344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.497 [2024-11-20 08:26:54.497206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.798 [2024-11-20 08:26:54.505284] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166df988 00:29:40.798 [2024-11-20 08:26:54.506379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:7258 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.798 [2024-11-20 08:26:54.506399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.798 [2024-11-20 08:26:54.514395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e84c0 00:29:40.798 [2024-11-20 08:26:54.515475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:5 nsid:1 lba:14587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.798 [2024-11-20 08:26:54.515495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.798 [2024-11-20 08:26:54.523303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ecc78 00:29:40.798 [2024-11-20 08:26:54.524402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.798 [2024-11-20 08:26:54.524421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.798 [2024-11-20 08:26:54.532481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166de8a8 00:29:40.798 [2024-11-20 08:26:54.533589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.798 [2024-11-20 08:26:54.533609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.798 [2024-11-20 08:26:54.541558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fb048 00:29:40.798 [2024-11-20 08:26:54.542626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.542645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.550683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fbcf0 00:29:40.799 [2024-11-20 08:26:54.551761] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:19211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.551781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.559619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f1ca0 00:29:40.799 [2024-11-20 08:26:54.560692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:20955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.560711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.568552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e27f0 00:29:40.799 [2024-11-20 08:26:54.569600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:21059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.569618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.577492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fc128 00:29:40.799 [2024-11-20 08:26:54.578542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.578561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.586430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e23b8 00:29:40.799 
[2024-11-20 08:26:54.587464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:22955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.587483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.594687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f4b08 00:29:40.799 [2024-11-20 08:26:54.595970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:9973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.595989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.602917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f96f8 00:29:40.799 [2024-11-20 08:26:54.603633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.603652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.611811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166fd208 00:29:40.799 [2024-11-20 08:26:54.612544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:10163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.612563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.620722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2439640) with pdu=0x2000166f3e60 00:29:40.799 [2024-11-20 08:26:54.621465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:1839 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.621484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.629928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eb760 00:29:40.799 [2024-11-20 08:26:54.630751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10698 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.630771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.638824] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e7818 00:29:40.799 [2024-11-20 08:26:54.639741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:12182 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.639760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.648174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e1710 00:29:40.799 [2024-11-20 08:26:54.649210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:565 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.649228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.656476] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e8d30 00:29:40.799 [2024-11-20 08:26:54.657169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:6048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.657188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.665506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166df550 00:29:40.799 [2024-11-20 08:26:54.666008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:12157 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.666028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.674825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f2510 00:29:40.799 [2024-11-20 08:26:54.675459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:19035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.675479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.685088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166edd58 00:29:40.799 [2024-11-20 08:26:54.686490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:14647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.686509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006d p:0 m:0 
dnr:0 00:29:40.799 [2024-11-20 08:26:54.694447] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e5220 00:29:40.799 [2024-11-20 08:26:54.695934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15029 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.695954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.700723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f57b0 00:29:40.799 [2024-11-20 08:26:54.701431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:4707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.701454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.709634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166df988 00:29:40.799 [2024-11-20 08:26:54.710462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:3426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.710482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.719839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e3d08 00:29:40.799 [2024-11-20 08:26:54.720829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:16103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.720848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:13 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.728995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e1b48 00:29:40.799 [2024-11-20 08:26:54.729942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:18902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.729962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.737970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f35f0 00:29:40.799 [2024-11-20 08:26:54.738913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.738932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.746900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e27f0 00:29:40.799 [2024-11-20 08:26:54.747829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:2603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.747847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.756143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e7818 00:29:40.799 [2024-11-20 08:26:54.757175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:16288 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.757194] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.765176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e8088 00:29:40.799 [2024-11-20 08:26:54.766233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:16923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.766252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.774094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166efae0 00:29:40.799 [2024-11-20 08:26:54.775082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20223 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.799 [2024-11-20 08:26:54.775101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:40.799 [2024-11-20 08:26:54.783018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f6cc8 00:29:40.799 [2024-11-20 08:26:54.784163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:1253 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.800 [2024-11-20 08:26:54.784183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:29:40.800 [2024-11-20 08:26:54.791608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166feb58 00:29:40.800 [2024-11-20 08:26:54.792605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.800 [2024-11-20 08:26:54.792624] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:29:40.800 [2024-11-20 08:26:54.801155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ea248 00:29:40.800 [2024-11-20 08:26:54.802235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:16970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.800 [2024-11-20 08:26:54.802256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:29:40.800 [2024-11-20 08:26:54.810462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e99d8 00:29:40.800 [2024-11-20 08:26:54.811730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.800 [2024-11-20 08:26:54.811749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:29:40.800 [2024-11-20 08:26:54.818755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e88f8 00:29:40.800 [2024-11-20 08:26:54.819617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:6768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:40.800 [2024-11-20 08:26:54.819635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:41.059 [2024-11-20 08:26:54.828773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ff3c8 00:29:41.059 [2024-11-20 08:26:54.830146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:8843 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:29:41.059 [2024-11-20 08:26:54.830165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:29:41.059 [2024-11-20 08:26:54.836652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eaab8 00:29:41.059 [2024-11-20 08:26:54.837382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:22748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.059 [2024-11-20 08:26:54.837402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:41.059 [2024-11-20 08:26:54.847020] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eaef0 00:29:41.059 [2024-11-20 08:26:54.848517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.059 [2024-11-20 08:26:54.848537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:29:41.059 [2024-11-20 08:26:54.853322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e4140 00:29:41.059 [2024-11-20 08:26:54.854024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:602 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.059 [2024-11-20 08:26:54.854043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:29:41.059 [2024-11-20 08:26:54.862776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f0788 00:29:41.059 [2024-11-20 08:26:54.863736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 
nsid:1 lba:1636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.059 [2024-11-20 08:26:54.863755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:29:41.059 [2024-11-20 08:26:54.871881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e6b70 00:29:41.059 [2024-11-20 08:26:54.872402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:25079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.059 [2024-11-20 08:26:54.872421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:41.059 [2024-11-20 08:26:54.882131] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eee38 00:29:41.059 [2024-11-20 08:26:54.883439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:11088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.059 [2024-11-20 08:26:54.883458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:29:41.059 [2024-11-20 08:26:54.891539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166eb760 00:29:41.059 [2024-11-20 08:26:54.892975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:41.059 [2024-11-20 08:26:54.892996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:29:41.059 [2024-11-20 08:26:54.898012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166f0ff8 00:29:41.060 [2024-11-20 08:26:54.898733] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:1171 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:41.060 [2024-11-20 08:26:54.898753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:29:41.060 [2024-11-20 08:26:54.909136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166ea248
00:29:41.060 [2024-11-20 08:26:54.910344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:2419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:41.060 [2024-11-20 08:26:54.910363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:29:41.060 [2024-11-20 08:26:54.917564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e3d08
00:29:41.060 [2024-11-20 08:26:54.918504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:41.060 [2024-11-20 08:26:54.918524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:41.060 [2024-11-20 08:26:54.926621] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439640) with pdu=0x2000166e4140
00:29:41.060 [2024-11-20 08:26:54.928717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:29:41.060 [2024-11-20 08:26:54.928738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:41.060 28289.50 IOPS, 110.51 MiB/s
00:29:41.060 Latency(us)
00:29:41.060 [2024-11-20T07:26:55.088Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:29:41.060 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:29:41.060 nvme0n1 : 2.00 28313.30 110.60 0.00 0.00 4516.38 1755.43 15541.39
00:29:41.060 [2024-11-20T07:26:55.088Z] ===================================================================================================================
00:29:41.060 [2024-11-20T07:26:55.088Z] Total : 28313.30 110.60 0.00 0.00 4516.38 1755.43 15541.39
00:29:41.060 {
00:29:41.060 "results": [
00:29:41.060 {
00:29:41.060 "job": "nvme0n1",
00:29:41.060 "core_mask": "0x2",
00:29:41.060 "workload": "randwrite",
00:29:41.060 "status": "finished",
00:29:41.060 "queue_depth": 128,
00:29:41.060 "io_size": 4096,
00:29:41.060 "runtime": 2.00284,
00:29:41.060 "iops": 28313.29512092828,
00:29:41.060 "mibps": 110.5988090661261,
00:29:41.060 "io_failed": 0,
00:29:41.060 "io_timeout": 0,
00:29:41.060 "avg_latency_us": 4516.377200429611,
00:29:41.060 "min_latency_us": 1755.4285714285713,
00:29:41.060 "max_latency_us": 15541.394285714287
00:29:41.060 }
00:29:41.060 ],
00:29:41.060 "core_count": 1
00:29:41.060 }
00:29:41.060 08:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:29:41.060 08:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:29:41.060 08:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:29:41.060 | .driver_specific
00:29:41.060 | .nvme_error
00:29:41.060 | .status_code
00:29:41.060 | .command_transient_transport_error'
00:29:41.060 08:26:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:29:41.319 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 222 > 0 ))
00:29:41.319 08:26:55
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1848928
00:29:41.319 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1848928 ']'
00:29:41.319 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1848928
00:29:41.319 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:29:41.319 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:29:41.319 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1848928
00:29:41.319 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:29:41.319 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:29:41.319 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1848928'
killing process with pid 1848928
08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1848928
Received shutdown signal, test time was about 2.000000 seconds
00:29:41.319
00:29:41.319 Latency(us)
[2024-11-20T07:26:55.347Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
[2024-11-20T07:26:55.347Z] ===================================================================================================================
[2024-11-20T07:26:55.347Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:29:41.319 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1848928
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1849616
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1849616 /var/tmp/bperf.sock
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 1849616 ']'
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
00:29:41.578 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:41.578 [2024-11-20 08:26:55.398989] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization...
00:29:41.578 [2024-11-20 08:26:55.399035] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1849616 ]
00:29:41.578 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:41.578 Zero copy mechanism will not be used.
00:29:41.578 [2024-11-20 08:26:55.472881] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:29:41.578 [2024-11-20 08:26:55.515227] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:29:41.837 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:29:41.837 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0
00:29:41.837 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:41.838 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:29:41.838 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:29:41.838 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:41.838 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:41.838 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:41.838 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:41.838 08:26:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:29:42.096 nvme0n1
00:29:42.096 08:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:29:42.096 08:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable
00:29:42.096 08:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:29:42.096 08:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:29:42.096 08:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:29:42.096 08:26:56 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:29:42.356 I/O size of 131072 is greater than zero copy threshold (65536).
00:29:42.356 Zero copy mechanism will not be used.
00:29:42.356 Running I/O for 2 seconds...
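The trace above shows how host/digest.sh counts transient transport errors: it calls the `bdev_get_iostat` RPC and pipes the JSON through `jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'`, then asserts the count is positive. A minimal Python sketch of that extraction is below; the sample JSON shape is an assumption reconstructed from the jq path in the trace, and the counter value 222 mirrors the `(( 222 > 0 ))` check seen earlier in the log.

```python
import json

# Hypothetical bdev_get_iostat-style payload; the nested nvme_error layout is
# inferred from the jq filter in the trace, not copied from real RPC output.
iostat = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 222
          }
        }
      }
    }
  ]
}
""")

def get_transient_errcount(stat: dict) -> int:
    # Same traversal as the jq filter:
    # .bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error
    return stat["bdevs"][0]["driver_specific"]["nvme_error"]["status_code"][
        "command_transient_transport_error"
    ]

count = get_transient_errcount(iostat)
print(count)
# digest.sh@71 then checks the shell equivalent of: count > 0
```

In the real script the JSON comes from `rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1`; per-status-code error counters are only populated because the controller was created after `bdev_nvme_set_options --nvme-error-stat`.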
00:29:42.356 [2024-11-20 08:26:56.173471] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.173568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.173596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.179135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.179481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.179505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.185758] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.186092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.186114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.192490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.192789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.192809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.198457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.198706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.198727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.204182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.204493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.204515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.210432] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.210718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.210738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.217488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.217809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.217830] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.224133] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.224440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.224461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.230954] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.231289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.231310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.238119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.238432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.238454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.244593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.244832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:42.356 [2024-11-20 08:26:56.244853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.250587] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.250813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.250834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.256869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.356 [2024-11-20 08:26:56.257114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.356 [2024-11-20 08:26:56.257135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.356 [2024-11-20 08:26:56.263178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.263427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.263449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.268922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.269169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.269191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.275763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.276091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.276115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.282361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.282601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.282623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.288487] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.288747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.288767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.293819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.294048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.294068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.298851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.299083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.299104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.303937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.304187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.304214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.309453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.309683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.309704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.313901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 
00:29:42.357 [2024-11-20 08:26:56.314135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.314156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.318127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.318370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.318391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.322298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.322537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.322557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.327343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.327614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.327634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.333301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.333571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.333592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.338950] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.339237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.339258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.344974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.345270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.345291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.351226] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.351456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.351477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.355857] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.356107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.356128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.360181] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.360435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.360456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.364627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.364883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.364902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.357 [2024-11-20 08:26:56.369822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.370077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.370096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:42.357 [2024-11-20 08:26:56.374757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.357 [2024-11-20 08:26:56.375007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.357 [2024-11-20 08:26:56.375027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.379848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.380093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.380114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.385363] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.385610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.385632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.390444] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.390683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.390703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.395357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.395605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.395626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.400200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.400455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.400475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.404974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.405230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.405251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.410254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.410487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.410514] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.415097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.415349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.415370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.420304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.420537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.420557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.426355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.426628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.426649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.431559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.431809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:42.618 [2024-11-20 08:26:56.431829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.436398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.436639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.436660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.441386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.441610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.441630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.446105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.446369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.446390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.451084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.451206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.451226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.456411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.456712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.456733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.462285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.462526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.462547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.467564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.467809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.467829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.472342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.472604] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.472625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.478048] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.478281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.478301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.484175] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.484763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.484784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.491333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.491651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.491672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.497784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 
00:29:42.618 [2024-11-20 08:26:56.498117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.498141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.504732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.505084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.505105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.512075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.512379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.512400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.519197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.618 [2024-11-20 08:26:56.519453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.618 [2024-11-20 08:26:56.519474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.618 [2024-11-20 08:26:56.526684] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.526926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.526947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.533245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.533587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.533608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.540241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.540545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.540566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.547672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.547901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.547923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.553107] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.553352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.553373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.557495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.557741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.557762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.561719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.561963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.561987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.565839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.566086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.566107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:42.619 [2024-11-20 08:26:56.569894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.570132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.570153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.573913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.574163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.574183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.578645] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.579011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.579031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.584513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.584839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.584861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.589880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.590179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.590200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.594978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.595228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.595250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.600266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.600504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.600524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.604478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.604715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.604736] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.608622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.608867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.608887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.612699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.612946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.612967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.616800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.617045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.617065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.620974] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.621229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:42.619 [2024-11-20 08:26:56.621250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.625043] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.625297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.625317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.629220] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.629459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.629480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.633606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.633858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.633879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.619 [2024-11-20 08:26:56.638726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.619 [2024-11-20 08:26:56.638972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.619 [2024-11-20 08:26:56.638993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.643916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.644147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.644168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.648683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.648916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.648936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.653076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.653323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.653344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.657407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.657645] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.657666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.661847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.662082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.662102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.666261] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.666513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.666533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.670589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.670830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.670850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.674689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 
00:29:42.880 [2024-11-20 08:26:56.674922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.674942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.679174] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.679416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.679441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.683844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.684102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.684123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.689374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.689631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.689652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.695784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.696080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.696102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.702323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.702542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.702563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.709855] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.710142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.710164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.880 [2024-11-20 08:26:56.716651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.880 [2024-11-20 08:26:56.716853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.880 [2024-11-20 08:26:56.716874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.722190] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.722438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.722459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.726987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.727242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.727262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.731860] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.732102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.732122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.736270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.736521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.736541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:42.881 [2024-11-20 08:26:56.740726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.740975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.740995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.744827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.745068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.745090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.748907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.749147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.749167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.753135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.753376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.753396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.757191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.757485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.757506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.761847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.762097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.762117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.767725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.768039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.768060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.773015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.773300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.773322] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.778221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.778464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.778484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.783053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.783295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.783316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.787796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.788049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.788069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.792613] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.792857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:42.881 [2024-11-20 08:26:56.792879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.797856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.798127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.798148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.802879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.803120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.803141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.807821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.808066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.808086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.812889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.813149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:576 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.813173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.817811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.818049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.818069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.822999] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.823263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.823283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.828104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.828379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.828399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.833270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.833502] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.833522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.838734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.838985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.839005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.844961] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.845210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.881 [2024-11-20 08:26:56.845231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.881 [2024-11-20 08:26:56.849922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.881 [2024-11-20 08:26:56.850145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.882 [2024-11-20 08:26:56.850166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.882 [2024-11-20 08:26:56.855478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.882 [2024-11-20 08:26:56.855720] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.882 [2024-11-20 08:26:56.855740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.882 [2024-11-20 08:26:56.860797] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.882 [2024-11-20 08:26:56.861076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.882 [2024-11-20 08:26:56.861096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.882 [2024-11-20 08:26:56.866374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.882 [2024-11-20 08:26:56.866615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.882 [2024-11-20 08:26:56.866636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.882 [2024-11-20 08:26:56.870829] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.882 [2024-11-20 08:26:56.871071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.882 [2024-11-20 08:26:56.871091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.882 [2024-11-20 08:26:56.875475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with 
pdu=0x2000166ff3c8 00:29:42.882 [2024-11-20 08:26:56.875724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.882 [2024-11-20 08:26:56.875745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.882 [2024-11-20 08:26:56.880414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.882 [2024-11-20 08:26:56.880655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.882 [2024-11-20 08:26:56.880675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.882 [2024-11-20 08:26:56.884896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.882 [2024-11-20 08:26:56.885136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.882 [2024-11-20 08:26:56.885156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:42.882 [2024-11-20 08:26:56.889289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.882 [2024-11-20 08:26:56.889532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.882 [2024-11-20 08:26:56.889552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:42.882 [2024-11-20 08:26:56.893585] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.882 [2024-11-20 08:26:56.893833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.882 [2024-11-20 08:26:56.893853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:42.882 [2024-11-20 08:26:56.897716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.882 [2024-11-20 08:26:56.897976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.882 [2024-11-20 08:26:56.897996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:42.882 [2024-11-20 08:26:56.902098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:42.882 [2024-11-20 08:26:56.902346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:42.882 [2024-11-20 08:26:56.902366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.906457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.906707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.906727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 
08:26:56.910648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.910891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.910911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.914779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.915027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.915047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.918972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.919226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.919246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.923118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.923362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.923383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.927237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.927477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.927498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.931294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.931547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.931582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.935437] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.935682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.935707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.939628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.939885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.939906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.943681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.943927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.943947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.947899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.948134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.948155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.952139] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.952388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.952408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.956248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.956498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.956519] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.960237] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.960486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.960506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.964452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.964696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.964716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.968721] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.968970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.968991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.972919] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.973170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:43.143 [2024-11-20 08:26:56.973190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.977082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.977315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.977335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.981183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.981422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.981442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.985301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.985532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.985552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.989358] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.989593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.989613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.993602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.993843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.993863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:56.997843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:56.998084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:56.998104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.143 [2024-11-20 08:26:57.002332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.143 [2024-11-20 08:26:57.002583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.143 [2024-11-20 08:26:57.002603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.006964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.007216] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.007236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.012608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.012873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.012894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.018529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.018773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.018793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.024738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.024980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.025001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.031361] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 
00:29:43.144 [2024-11-20 08:26:57.031656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.031676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.038042] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.038336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.038356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.044080] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.044325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.044345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.049050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.049298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.049318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.054287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.054573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.054593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.060235] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.060531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.060555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.066516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.066791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.066811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.072965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.073287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.073307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.078900] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.079239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.079260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.085339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.085662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.085683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.091723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.092050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.092070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.097975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.098277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.098297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:29:43.144 [2024-11-20 08:26:57.104161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.104496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.104517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.110280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.110572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.110592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.116529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.116844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.116864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.122612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.122922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.122942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.129143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.129462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.129481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.135539] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.135854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.135874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.141633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.141936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.141957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.147665] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.148008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.148028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.153734] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.154082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.154102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.144 [2024-11-20 08:26:57.159937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.144 [2024-11-20 08:26:57.160249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.144 [2024-11-20 08:26:57.160269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.166092] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.166355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.166377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.405 5875.00 IOPS, 734.38 MiB/s [2024-11-20T07:26:57.433Z] [2024-11-20 08:26:57.173071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.173359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11520 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.173381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.178015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.178258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.178277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.183988] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.184247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.184268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.189839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.190088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.190109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.194823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.195056] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.195077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.199647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.199878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.199898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.204520] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.204781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.204801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.209166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.209414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.209435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.213936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 
00:29:43.405 [2024-11-20 08:26:57.214182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.214214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.219123] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.219393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.219413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.224411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.224646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.224666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.229357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.229587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.229606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.234339] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.234557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.234577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.239023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.239280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.239300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.243784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.244032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.244052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.248266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.248506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.248526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.252962] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.253218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.253238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.405 [2024-11-20 08:26:57.258581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.405 [2024-11-20 08:26:57.258810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.405 [2024-11-20 08:26:57.258830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.265027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.265387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.265409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.271907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.272223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.272244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:29:43.406 [2024-11-20 08:26:57.278834] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.279187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.279213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.286039] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.286338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.286359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.292831] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.293155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.293175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.298893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.299129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.299149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.304462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.304730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.304751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.309291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.309538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.309557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.313786] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.314033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.314053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.318093] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.318337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.318357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.322247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.322487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.322507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.326397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.326664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.326685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.330602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.330841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.330861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.334714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.334955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:43.406 [2024-11-20 08:26:57.334975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.338882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.339133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.339153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.343013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.343281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.343301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.347088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.347334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.347358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.351106] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.351364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.351385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.355127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.355396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.355416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.359545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.359798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.359817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.363929] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.364182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.364208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.368304] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.368557] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.368578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.372762] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.373008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.373028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.377110] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.377363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.377383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.381521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.381765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.381784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.385933] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 
00:29:43.406 [2024-11-20 08:26:57.386189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.386215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.406 [2024-11-20 08:26:57.390382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.406 [2024-11-20 08:26:57.390639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.406 [2024-11-20 08:26:57.390658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.407 [2024-11-20 08:26:57.394836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.407 [2024-11-20 08:26:57.395081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.407 [2024-11-20 08:26:57.395101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.407 [2024-11-20 08:26:57.399299] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.407 [2024-11-20 08:26:57.399547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.407 [2024-11-20 08:26:57.399567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.407 [2024-11-20 08:26:57.403576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.407 [2024-11-20 08:26:57.403827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.407 [2024-11-20 08:26:57.403847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.407 [2024-11-20 08:26:57.407880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.407 [2024-11-20 08:26:57.408123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.407 [2024-11-20 08:26:57.408144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.407 [2024-11-20 08:26:57.412246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.407 [2024-11-20 08:26:57.412491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.407 [2024-11-20 08:26:57.412511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.407 [2024-11-20 08:26:57.416633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.407 [2024-11-20 08:26:57.416866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.407 [2024-11-20 08:26:57.416886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.407 [2024-11-20 08:26:57.421106] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.407 [2024-11-20 08:26:57.421361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.407 [2024-11-20 08:26:57.421381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.407 [2024-11-20 08:26:57.425501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.407 [2024-11-20 08:26:57.425738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.407 [2024-11-20 08:26:57.425758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.667 [2024-11-20 08:26:57.430445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.667 [2024-11-20 08:26:57.430687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.667 [2024-11-20 08:26:57.430707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.667 [2024-11-20 08:26:57.436443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.667 [2024-11-20 08:26:57.436679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.667 [2024-11-20 08:26:57.436697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:29:43.667 [2024-11-20 08:26:57.443135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.667 [2024-11-20 08:26:57.443380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.667 [2024-11-20 08:26:57.443401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.667 [2024-11-20 08:26:57.448668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.667 [2024-11-20 08:26:57.448920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.667 [2024-11-20 08:26:57.448940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.667 [2024-11-20 08:26:57.454104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.667 [2024-11-20 08:26:57.454369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.667 [2024-11-20 08:26:57.454390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.667 [2024-11-20 08:26:57.459292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.667 [2024-11-20 08:26:57.459551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.667 [2024-11-20 08:26:57.459571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.667 [2024-11-20 08:26:57.464625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.667 [2024-11-20 08:26:57.464859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.667 [2024-11-20 08:26:57.464880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.667 [2024-11-20 08:26:57.470731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.667 [2024-11-20 08:26:57.470963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.667 [2024-11-20 08:26:57.470987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.667 [2024-11-20 08:26:57.475620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.667 [2024-11-20 08:26:57.475860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.667 [2024-11-20 08:26:57.475880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.667 [2024-11-20 08:26:57.480251] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.667 [2024-11-20 08:26:57.480490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.667 [2024-11-20 08:26:57.480510] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.667 [2024-11-20 08:26:57.485425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.667 [2024-11-20 08:26:57.485661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.485681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.490822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.491115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.491135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.497592] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.497883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.497903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.504331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.504589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:43.668 [2024-11-20 08:26:57.504609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.510582] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.510843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.510863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.516894] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.517138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.517158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.521759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.522003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.522023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.526506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.526749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.526769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.530889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.531141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.531161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.535026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.535281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.535302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.539140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.539386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.539406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.543266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.543510] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.543531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.547630] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.547877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.547898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.551751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.551998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.552018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.555872] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.556107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.556127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.560022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.560275] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.560295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.564572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.564817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.564837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.569430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.569674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.569694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.574593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.574826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.574847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.579311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with 
pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.579561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.579581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.584283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.584517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.584537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.588914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.589155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.589175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.594006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.594258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.594278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.598467] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.598711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.598735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.602871] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.603111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.603131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.607136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.607395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.607416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 08:26:57.611330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.611592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.668 [2024-11-20 08:26:57.611612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.668 [2024-11-20 
08:26:57.615664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.668 [2024-11-20 08:26:57.615907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.615927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.620032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.620291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.620311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.624303] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.624557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.624576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.628578] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.628826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.628847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.633060] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.633297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.633317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.637366] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.637605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.637625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.641588] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.641843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.641863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.645819] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.646066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.646088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.650051] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.650309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.650330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.654409] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.654665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.654685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.659307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.659554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.659574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.664188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.664442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.664462] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.668659] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.668905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.668925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.673063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.673319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.673339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.677544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.677798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.677818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.681766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.682019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:43.669 [2024-11-20 08:26:57.682039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:43.669 [2024-11-20 08:26:57.685928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.669 [2024-11-20 08:26:57.686160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.669 [2024-11-20 08:26:57.686179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:43.930 [2024-11-20 08:26:57.690259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.930 [2024-11-20 08:26:57.690511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.930 [2024-11-20 08:26:57.690532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:43.930 [2024-11-20 08:26:57.694880] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.930 [2024-11-20 08:26:57.695131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.930 [2024-11-20 08:26:57.695152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:43.930 [2024-11-20 08:26:57.699815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:43.930 [2024-11-20 08:26:57.700044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:43.930 [2024-11-20 08:26:57.700065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:29:43.930 [2024-11-20 08:26:57.704693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8
00:29:43.930 [2024-11-20 08:26:57.704932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:43.930 [2024-11-20 08:26:57.704952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
[... same three-line pattern (tcp.c:2233:data_crc32_calc_done Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8, then WRITE sqid:1 cid:0 nsid:1 len:32, then COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0) repeats for dozens of further WRITE commands between 08:26:57.709 and 08:26:58.121, differing only in timestamp, lba, and sqhd (cycling 0002/0022/0042/0062) ...]
00:29:44.193 [2024-11-20 08:26:58.127161] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:44.193 [2024-11-20 08:26:58.127443] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.193 [2024-11-20 08:26:58.127464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.193 [2024-11-20 08:26:58.133500] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:44.193 [2024-11-20 08:26:58.133758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.193 [2024-11-20 08:26:58.133778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.193 [2024-11-20 08:26:58.139276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:44.193 [2024-11-20 08:26:58.139528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.193 [2024-11-20 08:26:58.139549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.193 [2024-11-20 08:26:58.144430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:44.193 [2024-11-20 08:26:58.144695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.193 [2024-11-20 08:26:58.144719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.193 [2024-11-20 08:26:58.149589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with 
pdu=0x2000166ff3c8 00:29:44.193 [2024-11-20 08:26:58.149837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.193 [2024-11-20 08:26:58.149858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.193 [2024-11-20 08:26:58.154908] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:44.193 [2024-11-20 08:26:58.155156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.193 [2024-11-20 08:26:58.155176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:29:44.193 [2024-11-20 08:26:58.159873] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:44.193 [2024-11-20 08:26:58.160136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.193 [2024-11-20 08:26:58.160157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:29:44.193 [2024-11-20 08:26:58.164606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:44.193 [2024-11-20 08:26:58.164851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.193 [2024-11-20 08:26:58.164872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:29:44.193 [2024-11-20 08:26:58.169795] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x2439980) with pdu=0x2000166ff3c8 00:29:44.193 [2024-11-20 08:26:58.170043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:44.193 [2024-11-20 08:26:58.170064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:29:44.193 5975.00 IOPS, 746.88 MiB/s 00:29:44.193 Latency(us) 00:29:44.193 [2024-11-20T07:26:58.221Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.193 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:29:44.193 nvme0n1 : 2.00 5972.33 746.54 0.00 0.00 2674.76 1903.66 9299.87 00:29:44.193 [2024-11-20T07:26:58.221Z] =================================================================================================================== 00:29:44.193 [2024-11-20T07:26:58.221Z] Total : 5972.33 746.54 0.00 0.00 2674.76 1903.66 9299.87 00:29:44.193 { 00:29:44.193 "results": [ 00:29:44.193 { 00:29:44.193 "job": "nvme0n1", 00:29:44.193 "core_mask": "0x2", 00:29:44.193 "workload": "randwrite", 00:29:44.193 "status": "finished", 00:29:44.193 "queue_depth": 16, 00:29:44.193 "io_size": 131072, 00:29:44.193 "runtime": 2.003572, 00:29:44.193 "iops": 5972.3334125252295, 00:29:44.193 "mibps": 746.5416765656537, 00:29:44.193 "io_failed": 0, 00:29:44.193 "io_timeout": 0, 00:29:44.193 "avg_latency_us": 2674.760405275264, 00:29:44.193 "min_latency_us": 1903.664761904762, 00:29:44.193 "max_latency_us": 9299.870476190476 00:29:44.193 } 00:29:44.193 ], 00:29:44.193 "core_count": 1 00:29:44.193 } 00:29:44.193 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:29:44.193 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:29:44.193 08:26:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:29:44.193 | .driver_specific 00:29:44.193 | .nvme_error 00:29:44.193 | .status_code 00:29:44.193 | .command_transient_transport_error' 00:29:44.193 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:29:44.453 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 386 > 0 )) 00:29:44.453 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1849616 00:29:44.453 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1849616 ']' 00:29:44.453 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1849616 00:29:44.453 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:44.453 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.453 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1849616 00:29:44.453 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:44.453 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:44.453 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1849616' 00:29:44.453 killing process with pid 1849616 00:29:44.453 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1849616 00:29:44.453 Received shutdown signal, test time was about 2.000000 seconds 00:29:44.453 00:29:44.453 
Latency(us) 00:29:44.453 [2024-11-20T07:26:58.481Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:44.453 [2024-11-20T07:26:58.481Z] =================================================================================================================== 00:29:44.453 [2024-11-20T07:26:58.481Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:44.453 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1849616 00:29:44.712 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1847796 00:29:44.712 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 1847796 ']' 00:29:44.712 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 1847796 00:29:44.712 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:29:44.712 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:44.712 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1847796 00:29:44.712 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:44.712 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:44.712 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1847796' 00:29:44.712 killing process with pid 1847796 00:29:44.712 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 1847796 00:29:44.712 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 1847796 00:29:44.972 00:29:44.972 real 0m13.885s 
00:29:44.972 user 0m26.598s 00:29:44.972 sys 0m4.435s 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:29:44.972 ************************************ 00:29:44.972 END TEST nvmf_digest_error 00:29:44.972 ************************************ 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@99 -- # sync 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@102 -- # set +e 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:44.972 rmmod nvme_tcp 00:29:44.972 rmmod nvme_fabrics 00:29:44.972 rmmod nvme_keyring 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@106 -- # set -e 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@107 -- # return 0 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # '[' -n 1847796 ']' 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # killprocess 1847796 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 1847796 ']' 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 1847796 
00:29:44.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1847796) - No such process 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 1847796 is not found' 00:29:44.972 Process with pid 1847796 is not found 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # nvmf_fini 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@254 -- # local dev 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@257 -- # remove_target_ns 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:44.972 08:26:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@258 -- # delete_main_bridge 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@121 -- # return 0 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:47.513 08:27:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:29:47.513 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:29:47.514 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # _dev=0 00:29:47.514 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # dev_map=() 00:29:47.514 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@274 -- # iptr 00:29:47.514 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-save 00:29:47.514 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:29:47.514 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@548 -- # iptables-restore 00:29:47.514 00:29:47.514 real 0m36.209s 00:29:47.514 user 0m54.936s 00:29:47.514 sys 0m13.610s 00:29:47.514 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:47.514 08:27:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:29:47.514 
************************************ 00:29:47.514 END TEST nvmf_digest 00:29:47.514 ************************************ 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:47.514 ************************************ 00:29:47.514 START TEST nvmf_bdevperf 00:29:47.514 ************************************ 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:29:47.514 * Looking for test storage... 
00:29:47.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:47.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.514 --rc genhtml_branch_coverage=1 00:29:47.514 --rc genhtml_function_coverage=1 00:29:47.514 --rc genhtml_legend=1 00:29:47.514 --rc geninfo_all_blocks=1 00:29:47.514 --rc geninfo_unexecuted_blocks=1 00:29:47.514 00:29:47.514 ' 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:29:47.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.514 --rc genhtml_branch_coverage=1 00:29:47.514 --rc genhtml_function_coverage=1 00:29:47.514 --rc genhtml_legend=1 00:29:47.514 --rc geninfo_all_blocks=1 00:29:47.514 --rc geninfo_unexecuted_blocks=1 00:29:47.514 00:29:47.514 ' 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:47.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.514 --rc genhtml_branch_coverage=1 00:29:47.514 --rc genhtml_function_coverage=1 00:29:47.514 --rc genhtml_legend=1 00:29:47.514 --rc geninfo_all_blocks=1 00:29:47.514 --rc geninfo_unexecuted_blocks=1 00:29:47.514 00:29:47.514 ' 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:47.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:47.514 --rc genhtml_branch_coverage=1 00:29:47.514 --rc genhtml_function_coverage=1 00:29:47.514 --rc genhtml_legend=1 00:29:47.514 --rc geninfo_all_blocks=1 00:29:47.514 --rc geninfo_unexecuted_blocks=1 00:29:47.514 00:29:47.514 ' 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- 
# NVMF_TRANSPORT_OPTS= 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.514 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@50 -- # : 0 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:29:47.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # remove_target_ns 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # xtrace_disable 00:29:47.515 08:27:01 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # pci_devs=() 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # net_devs=() 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # e810=() 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # local -ga e810 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # x722=() 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # local -ga x722 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # mlx=() 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # local -ga mlx 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:54.089 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:54.089 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:54.089 Found net devices under 0000:86:00.0: cvl_0_0 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:54.089 Found net devices under 0000:86:00.1: cvl_0_1 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # is_hw=yes 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@247 -- # create_target_ns 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:54.089 08:27:06 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@28 -- # local -g _dev 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # ips=() 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:29:54.089 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:29:54.090 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:29:54.090 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:29:54.090 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:29:54.090 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:29:54.090 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:29:54.090 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:29:54.090 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:29:54.090 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:29:54.090 08:27:06 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:54.090 08:27:06 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772161 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:54.090 10.0.0.1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # 
val_to_ip 167772162 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772162 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:54.090 10.0.0.2 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # local -n 
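The `val_to_ip` steps in the trace above turn a 32-bit integer from the IP pool (seeded at `0x0a000001`) into dotted-quad form — 167772161 becomes 10.0.0.1, 167772162 becomes 10.0.0.2. A self-contained equivalent using shell arithmetic (a sketch of the observed behavior, not SPDK's exact implementation):

```shell
# Convert a 32-bit integer to dotted-quad IPv4 notation, matching the
# val_to_ip output seen in the trace (167772161 -> 10.0.0.1).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}
val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This also explains the pairing logic in `setup_interface_pair`: the initiator gets `$ip` and the target gets `$((++ip))`, so each interface pair consumes two consecutive addresses from the pool.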
ns=NVMF_TARGET_NS_CMD 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:29:54.090 
08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # ip netns 
exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:54.090 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:54.090 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.470 ms 00:29:54.090 00:29:54.090 --- 10.0.0.1 ping statistics --- 00:29:54.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.090 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=target0 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:29:54.090 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:54.090 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:29:54.090 00:29:54.090 --- 10.0.0.2 ping statistics --- 00:29:54.090 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:54.090 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair++ )) 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # return 0 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:54.090 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@174 -- # 
get_ip_address initiator0 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=initiator0 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev initiator1 
00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=initiator1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # return 1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev= 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@160 -- # return 0 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev target0 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=target0 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:29:54.091 08:27:07 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # get_net_dev target1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # local dev=target1 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # return 1 00:29:54.091 08:27:07 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@159 -- # dev= 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@160 -- # return 0 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:29:54.091 ' 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=1854158 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # 
waitforlisten 1854158 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1854158 ']' 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:54.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:54.091 [2024-11-20 08:27:07.393181] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:29:54.091 [2024-11-20 08:27:07.393231] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:54.091 [2024-11-20 08:27:07.468479] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:54.091 [2024-11-20 08:27:07.511154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:54.091 [2024-11-20 08:27:07.511188] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:54.091 [2024-11-20 08:27:07.511195] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:54.091 [2024-11-20 08:27:07.511206] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:29:54.091 [2024-11-20 08:27:07.511211] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:54.091 [2024-11-20 08:27:07.512494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:54.091 [2024-11-20 08:27:07.512583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:54.091 [2024-11-20 08:27:07.512582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:54.091 [2024-11-20 08:27:07.647259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@10 -- # set +x 00:29:54.091 Malloc0 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:54.091 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:29:54.092 [2024-11-20 08:27:07.713166] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:29:54.092 { 00:29:54.092 "params": { 00:29:54.092 "name": "Nvme$subsystem", 00:29:54.092 "trtype": "$TEST_TRANSPORT", 00:29:54.092 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:54.092 "adrfam": "ipv4", 00:29:54.092 "trsvcid": "$NVMF_PORT", 00:29:54.092 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:54.092 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:54.092 "hdgst": ${hdgst:-false}, 00:29:54.092 "ddgst": ${ddgst:-false} 00:29:54.092 }, 00:29:54.092 "method": "bdev_nvme_attach_controller" 00:29:54.092 } 00:29:54.092 EOF 00:29:54.092 )") 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:29:54.092 08:27:07 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:29:54.092 "params": { 00:29:54.092 "name": "Nvme1", 00:29:54.092 "trtype": "tcp", 00:29:54.092 "traddr": "10.0.0.2", 00:29:54.092 "adrfam": "ipv4", 00:29:54.092 "trsvcid": "4420", 00:29:54.092 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:54.092 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:54.092 "hdgst": false, 00:29:54.092 "ddgst": false 00:29:54.092 }, 00:29:54.092 "method": "bdev_nvme_attach_controller" 00:29:54.092 }' 00:29:54.092 [2024-11-20 08:27:07.762900] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:29:54.092 [2024-11-20 08:27:07.762943] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1854188 ] 00:29:54.092 [2024-11-20 08:27:07.839097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:54.092 [2024-11-20 08:27:07.880032] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:54.092 Running I/O for 1 seconds... 00:29:55.471 11451.00 IOPS, 44.73 MiB/s 00:29:55.471 Latency(us) 00:29:55.471 [2024-11-20T07:27:09.499Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:55.471 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:29:55.471 Verification LBA range: start 0x0 length 0x4000 00:29:55.471 Nvme1n1 : 1.01 11518.83 45.00 0.00 0.00 11068.09 2246.95 11671.65 00:29:55.471 [2024-11-20T07:27:09.499Z] =================================================================================================================== 00:29:55.471 [2024-11-20T07:27:09.499Z] Total : 11518.83 45.00 0.00 0.00 11068.09 2246.95 11671.65 00:29:55.471 08:27:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1854428 00:29:55.471 08:27:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:29:55.471 08:27:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:29:55.471 08:27:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:29:55.471 08:27:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:29:55.471 08:27:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:29:55.471 08:27:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for 
subsystem in "${@:-1}" 00:29:55.471 08:27:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:29:55.471 { 00:29:55.471 "params": { 00:29:55.471 "name": "Nvme$subsystem", 00:29:55.471 "trtype": "$TEST_TRANSPORT", 00:29:55.471 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:55.471 "adrfam": "ipv4", 00:29:55.471 "trsvcid": "$NVMF_PORT", 00:29:55.471 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:55.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:55.472 "hdgst": ${hdgst:-false}, 00:29:55.472 "ddgst": ${ddgst:-false} 00:29:55.472 }, 00:29:55.472 "method": "bdev_nvme_attach_controller" 00:29:55.472 } 00:29:55.472 EOF 00:29:55.472 )") 00:29:55.472 08:27:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:29:55.472 08:27:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:29:55.472 08:27:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:29:55.472 08:27:09 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:29:55.472 "params": { 00:29:55.472 "name": "Nvme1", 00:29:55.472 "trtype": "tcp", 00:29:55.472 "traddr": "10.0.0.2", 00:29:55.472 "adrfam": "ipv4", 00:29:55.472 "trsvcid": "4420", 00:29:55.472 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:55.472 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:55.472 "hdgst": false, 00:29:55.472 "ddgst": false 00:29:55.472 }, 00:29:55.472 "method": "bdev_nvme_attach_controller" 00:29:55.472 }' 00:29:55.472 [2024-11-20 08:27:09.300922] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:29:55.472 [2024-11-20 08:27:09.300970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1854428 ] 00:29:55.472 [2024-11-20 08:27:09.374581] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:55.472 [2024-11-20 08:27:09.414976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.730 Running I/O for 15 seconds... 00:29:57.602 11422.00 IOPS, 44.62 MiB/s [2024-11-20T07:27:12.640Z] 11524.00 IOPS, 45.02 MiB/s [2024-11-20T07:27:12.640Z] 08:27:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1854158 00:29:58.612 08:27:12 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:29:58.612 [2024-11-20 08:27:12.277472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:113624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.612 [2024-11-20 08:27:12.277510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:113632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:113640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277566] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:113648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:113656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:113664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:113672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:113688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:113696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:113712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:113720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:113728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 
08:27:12.277750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:113752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:113760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277846] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:113784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:113792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:113800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:113808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:113816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:113824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:113832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:113840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.277991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:113848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.277998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.278007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:113856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.278016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.278026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:113864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.278033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.278042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 
08:27:12.278050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.278058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:113880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.278065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.278074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:113888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.278084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.278093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:113896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.278103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.278112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:113904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.278121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.278131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:113912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.613 [2024-11-20 08:27:12.278144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.613 [2024-11-20 08:27:12.278156] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:113928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:113936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:113944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:113960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278386] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:39 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:58.614 [2024-11-20 08:27:12.278556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278637] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 
lba:113352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.614 [2024-11-20 08:27:12.278725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.614 [2024-11-20 08:27:12.278791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.614 [2024-11-20 08:27:12.278798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 
[2024-11-20 08:27:12.278808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.278823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.278838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.278853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.278870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.278883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.278898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.278913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.278928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.278942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.278956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.278971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 
lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.278986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.278995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.279009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.279025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.279039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.279053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 
[2024-11-20 08:27:12.279061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:113360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:113368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:113384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:113392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:113400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279142] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:113408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:113424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:113432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:113440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 
lba:113448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:113456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:113472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:29:58.615 [2024-11-20 08:27:12.279296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:113480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:58.615 [2024-11-20 08:27:12.279320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:113488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:113496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:113512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:113520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.615 [2024-11-20 08:27:12.279387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.615 [2024-11-20 08:27:12.279395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:113528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-20 08:27:12.279401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.616 [2024-11-20 08:27:12.279409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:113536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-20 08:27:12.279416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.616 [2024-11-20 08:27:12.279425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-20 08:27:12.279431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.616 [2024-11-20 08:27:12.279439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:113552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-20 08:27:12.279445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.616 [2024-11-20 08:27:12.279453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:113560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-20 08:27:12.279459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.616 [2024-11-20 08:27:12.279467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:113568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-20 08:27:12.279473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.616 [2024-11-20 08:27:12.279481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 
nsid:1 lba:113576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-20 08:27:12.279488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.616 [2024-11-20 08:27:12.279496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-20 08:27:12.279502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.616 [2024-11-20 08:27:12.279510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:113592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-20 08:27:12.279516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.616 [2024-11-20 08:27:12.279524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:113600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-20 08:27:12.279530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.616 [2024-11-20 08:27:12.279540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:58.616 [2024-11-20 08:27:12.279547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.616 [2024-11-20 08:27:12.279556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2437e10 is same with the state(6) to be set 00:29:58.616 [2024-11-20 08:27:12.279565] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:29:58.616 [2024-11-20 08:27:12.279571] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:29:58.616 [2024-11-20 08:27:12.279577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:113616 len:8 PRP1 0x0 PRP2 0x0 00:29:58.616 [2024-11-20 08:27:12.279586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:58.616 [2024-11-20 08:27:12.282480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.616 [2024-11-20 08:27:12.282532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.616 [2024-11-20 08:27:12.283044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.616 [2024-11-20 08:27:12.283062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.616 [2024-11-20 08:27:12.283070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.616 [2024-11-20 08:27:12.283253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.616 [2024-11-20 08:27:12.283428] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.616 [2024-11-20 08:27:12.283437] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.616 [2024-11-20 08:27:12.283446] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.616 [2024-11-20 08:27:12.283454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.616 [2024-11-20 08:27:12.295575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.616 [2024-11-20 08:27:12.295942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.616 [2024-11-20 08:27:12.295989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.616 [2024-11-20 08:27:12.296014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.616 [2024-11-20 08:27:12.296607] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.616 [2024-11-20 08:27:12.297171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.616 [2024-11-20 08:27:12.297181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.616 [2024-11-20 08:27:12.297189] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.616 [2024-11-20 08:27:12.297197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.616 [2024-11-20 08:27:12.308537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.616 [2024-11-20 08:27:12.308907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.616 [2024-11-20 08:27:12.308945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.616 [2024-11-20 08:27:12.308972] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.616 [2024-11-20 08:27:12.309567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.616 [2024-11-20 08:27:12.310134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.616 [2024-11-20 08:27:12.310144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.616 [2024-11-20 08:27:12.310150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.616 [2024-11-20 08:27:12.310156] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.616 [2024-11-20 08:27:12.321508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.616 [2024-11-20 08:27:12.321951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.616 [2024-11-20 08:27:12.321997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.616 [2024-11-20 08:27:12.322022] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.616 [2024-11-20 08:27:12.322614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.616 [2024-11-20 08:27:12.323200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.616 [2024-11-20 08:27:12.323236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.616 [2024-11-20 08:27:12.323265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.616 [2024-11-20 08:27:12.323272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.616 [2024-11-20 08:27:12.334320] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.616 [2024-11-20 08:27:12.334621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.616 [2024-11-20 08:27:12.334638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.616 [2024-11-20 08:27:12.334645] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.616 [2024-11-20 08:27:12.334823] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.616 [2024-11-20 08:27:12.334991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.616 [2024-11-20 08:27:12.335001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.616 [2024-11-20 08:27:12.335009] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.616 [2024-11-20 08:27:12.335016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.616 [2024-11-20 08:27:12.347266] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.616 [2024-11-20 08:27:12.347609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.616 [2024-11-20 08:27:12.347626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.616 [2024-11-20 08:27:12.347633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.616 [2024-11-20 08:27:12.347792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.616 [2024-11-20 08:27:12.347950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.616 [2024-11-20 08:27:12.347960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.616 [2024-11-20 08:27:12.347966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.616 [2024-11-20 08:27:12.347975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.616 [2024-11-20 08:27:12.360105] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.616 [2024-11-20 08:27:12.360386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.616 [2024-11-20 08:27:12.360403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.616 [2024-11-20 08:27:12.360411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.616 [2024-11-20 08:27:12.360569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.617 [2024-11-20 08:27:12.360727] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.617 [2024-11-20 08:27:12.360738] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.617 [2024-11-20 08:27:12.360744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.617 [2024-11-20 08:27:12.360751] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.617 [2024-11-20 08:27:12.372993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.617 [2024-11-20 08:27:12.373450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.617 [2024-11-20 08:27:12.373498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.617 [2024-11-20 08:27:12.373523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.617 [2024-11-20 08:27:12.374074] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.617 [2024-11-20 08:27:12.374240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.617 [2024-11-20 08:27:12.374250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.617 [2024-11-20 08:27:12.374257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.617 [2024-11-20 08:27:12.374263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.617 [2024-11-20 08:27:12.385839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.617 [2024-11-20 08:27:12.386190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.617 [2024-11-20 08:27:12.386213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.617 [2024-11-20 08:27:12.386221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.617 [2024-11-20 08:27:12.386400] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.617 [2024-11-20 08:27:12.386569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.617 [2024-11-20 08:27:12.386579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.617 [2024-11-20 08:27:12.386585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.617 [2024-11-20 08:27:12.386591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.617 [2024-11-20 08:27:12.398681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.617 [2024-11-20 08:27:12.399074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.617 [2024-11-20 08:27:12.399091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.617 [2024-11-20 08:27:12.399099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.617 [2024-11-20 08:27:12.399283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.617 [2024-11-20 08:27:12.399452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.617 [2024-11-20 08:27:12.399462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.617 [2024-11-20 08:27:12.399469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.617 [2024-11-20 08:27:12.399475] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.617 [2024-11-20 08:27:12.411532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.617 [2024-11-20 08:27:12.411856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.617 [2024-11-20 08:27:12.411874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.617 [2024-11-20 08:27:12.411881] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.617 [2024-11-20 08:27:12.412039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.617 [2024-11-20 08:27:12.412199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.617 [2024-11-20 08:27:12.412216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.617 [2024-11-20 08:27:12.412223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.617 [2024-11-20 08:27:12.412230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.617 [2024-11-20 08:27:12.424281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.617 [2024-11-20 08:27:12.424642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.617 [2024-11-20 08:27:12.424659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.617 [2024-11-20 08:27:12.424667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.617 [2024-11-20 08:27:12.424834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.617 [2024-11-20 08:27:12.425002] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.617 [2024-11-20 08:27:12.425012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.617 [2024-11-20 08:27:12.425018] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.617 [2024-11-20 08:27:12.425025] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.617 [2024-11-20 08:27:12.437271] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.617 [2024-11-20 08:27:12.437661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.617 [2024-11-20 08:27:12.437678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.617 [2024-11-20 08:27:12.437689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.617 [2024-11-20 08:27:12.437847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.617 [2024-11-20 08:27:12.438007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.617 [2024-11-20 08:27:12.438015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.617 [2024-11-20 08:27:12.438021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.617 [2024-11-20 08:27:12.438027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.617 [2024-11-20 08:27:12.450247] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.617 [2024-11-20 08:27:12.450599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.617 [2024-11-20 08:27:12.450617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.617 [2024-11-20 08:27:12.450625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.617 [2024-11-20 08:27:12.450797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.617 [2024-11-20 08:27:12.450969] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.617 [2024-11-20 08:27:12.450979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.617 [2024-11-20 08:27:12.450986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.617 [2024-11-20 08:27:12.450993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.617 [2024-11-20 08:27:12.463275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.617 [2024-11-20 08:27:12.463624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.617 [2024-11-20 08:27:12.463643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.617 [2024-11-20 08:27:12.463650] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.617 [2024-11-20 08:27:12.463817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.617 [2024-11-20 08:27:12.463985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.617 [2024-11-20 08:27:12.463993] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.617 [2024-11-20 08:27:12.464000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.617 [2024-11-20 08:27:12.464006] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.617 [2024-11-20 08:27:12.476303] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.617 [2024-11-20 08:27:12.476587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.618 [2024-11-20 08:27:12.476605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.618 [2024-11-20 08:27:12.476613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.618 [2024-11-20 08:27:12.476785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.618 [2024-11-20 08:27:12.476960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.618 [2024-11-20 08:27:12.476969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.618 [2024-11-20 08:27:12.476976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.618 [2024-11-20 08:27:12.476982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.618 [2024-11-20 08:27:12.489336] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.618 [2024-11-20 08:27:12.489736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.618 [2024-11-20 08:27:12.489754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.618 [2024-11-20 08:27:12.489762] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.618 [2024-11-20 08:27:12.490336] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.618 [2024-11-20 08:27:12.490506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.618 [2024-11-20 08:27:12.490516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.618 [2024-11-20 08:27:12.490522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.618 [2024-11-20 08:27:12.490529] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.618 [2024-11-20 08:27:12.502280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.618 [2024-11-20 08:27:12.502692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.618 [2024-11-20 08:27:12.502711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.618 [2024-11-20 08:27:12.502718] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.618 [2024-11-20 08:27:12.502891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.618 [2024-11-20 08:27:12.503064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.618 [2024-11-20 08:27:12.503073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.618 [2024-11-20 08:27:12.503080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.618 [2024-11-20 08:27:12.503087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.618 [2024-11-20 08:27:12.515251] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.618 [2024-11-20 08:27:12.515658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.618 [2024-11-20 08:27:12.515677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.618 [2024-11-20 08:27:12.515685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.618 [2024-11-20 08:27:12.515858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.618 [2024-11-20 08:27:12.516034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.618 [2024-11-20 08:27:12.516045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.618 [2024-11-20 08:27:12.516051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.618 [2024-11-20 08:27:12.516062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.618 [2024-11-20 08:27:12.528197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.618 [2024-11-20 08:27:12.528532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.618 [2024-11-20 08:27:12.528550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.618 [2024-11-20 08:27:12.528559] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.618 [2024-11-20 08:27:12.528730] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.618 [2024-11-20 08:27:12.528904] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.618 [2024-11-20 08:27:12.528914] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.618 [2024-11-20 08:27:12.528921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.618 [2024-11-20 08:27:12.528927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.618 [2024-11-20 08:27:12.541277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.618 [2024-11-20 08:27:12.541563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.618 [2024-11-20 08:27:12.541580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.618 [2024-11-20 08:27:12.541588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.618 [2024-11-20 08:27:12.541760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.618 [2024-11-20 08:27:12.541932] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.618 [2024-11-20 08:27:12.541942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.618 [2024-11-20 08:27:12.541949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.618 [2024-11-20 08:27:12.541957] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.618 [2024-11-20 08:27:12.554327] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.618 [2024-11-20 08:27:12.554665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.618 [2024-11-20 08:27:12.554684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.618 [2024-11-20 08:27:12.554693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.618 [2024-11-20 08:27:12.554865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.618 [2024-11-20 08:27:12.555038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.618 [2024-11-20 08:27:12.555046] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.618 [2024-11-20 08:27:12.555053] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.618 [2024-11-20 08:27:12.555060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.618 [2024-11-20 08:27:12.567328] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.618 [2024-11-20 08:27:12.567679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.618 [2024-11-20 08:27:12.567696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.618 [2024-11-20 08:27:12.567704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.618 [2024-11-20 08:27:12.567872] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.618 [2024-11-20 08:27:12.568040] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.618 [2024-11-20 08:27:12.568049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.618 [2024-11-20 08:27:12.568056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.618 [2024-11-20 08:27:12.568062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.618 [2024-11-20 08:27:12.580319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.618 [2024-11-20 08:27:12.580674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.618 [2024-11-20 08:27:12.580693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.618 [2024-11-20 08:27:12.580701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.618 [2024-11-20 08:27:12.580868] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.618 [2024-11-20 08:27:12.581037] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.618 [2024-11-20 08:27:12.581045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.618 [2024-11-20 08:27:12.581051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.618 [2024-11-20 08:27:12.581057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.618 [2024-11-20 08:27:12.593136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.618 [2024-11-20 08:27:12.593414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.618 [2024-11-20 08:27:12.593432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.618 [2024-11-20 08:27:12.593439] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.618 [2024-11-20 08:27:12.593598] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.618 [2024-11-20 08:27:12.593758] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.618 [2024-11-20 08:27:12.593766] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.618 [2024-11-20 08:27:12.593772] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.618 [2024-11-20 08:27:12.593778] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.618 [2024-11-20 08:27:12.606057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.618 [2024-11-20 08:27:12.606488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.619 [2024-11-20 08:27:12.606536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.619 [2024-11-20 08:27:12.606568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.619 [2024-11-20 08:27:12.606933] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.619 [2024-11-20 08:27:12.607095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.619 [2024-11-20 08:27:12.607105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.619 [2024-11-20 08:27:12.607111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.619 [2024-11-20 08:27:12.607117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.619 [2024-11-20 08:27:12.619000] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.619 [2024-11-20 08:27:12.619372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.619 [2024-11-20 08:27:12.619391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.619 [2024-11-20 08:27:12.619398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.619 [2024-11-20 08:27:12.619557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.619 [2024-11-20 08:27:12.619715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.619 [2024-11-20 08:27:12.619724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.619 [2024-11-20 08:27:12.619730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.619 [2024-11-20 08:27:12.619736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.879 10184.33 IOPS, 39.78 MiB/s [2024-11-20T07:27:12.907Z] [2024-11-20 08:27:12.631995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.879 [2024-11-20 08:27:12.632443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.879 [2024-11-20 08:27:12.632490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.879 [2024-11-20 08:27:12.632516] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.879 [2024-11-20 08:27:12.632984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.879 [2024-11-20 08:27:12.633158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.879 [2024-11-20 08:27:12.633168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.879 [2024-11-20 08:27:12.633175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.879 [2024-11-20 08:27:12.633183] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.879 [2024-11-20 08:27:12.644789] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.879 [2024-11-20 08:27:12.645217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.879 [2024-11-20 08:27:12.645251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.879 [2024-11-20 08:27:12.645259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.879 [2024-11-20 08:27:12.645432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.879 [2024-11-20 08:27:12.645615] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.879 [2024-11-20 08:27:12.645625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.879 [2024-11-20 08:27:12.645631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.879 [2024-11-20 08:27:12.645638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.879 [2024-11-20 08:27:12.657635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.879 [2024-11-20 08:27:12.657988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.879 [2024-11-20 08:27:12.658032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.879 [2024-11-20 08:27:12.658056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.879 [2024-11-20 08:27:12.658647] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.879 [2024-11-20 08:27:12.659096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.879 [2024-11-20 08:27:12.659106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.879 [2024-11-20 08:27:12.659112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.879 [2024-11-20 08:27:12.659118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.879 [2024-11-20 08:27:12.672621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.879 [2024-11-20 08:27:12.673139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.879 [2024-11-20 08:27:12.673185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.879 [2024-11-20 08:27:12.673225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.879 [2024-11-20 08:27:12.673772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.879 [2024-11-20 08:27:12.674027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.879 [2024-11-20 08:27:12.674041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.879 [2024-11-20 08:27:12.674051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.879 [2024-11-20 08:27:12.674060] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.879 [2024-11-20 08:27:12.685613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.879 [2024-11-20 08:27:12.686035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.879 [2024-11-20 08:27:12.686054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.879 [2024-11-20 08:27:12.686061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.879 [2024-11-20 08:27:12.686237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.879 [2024-11-20 08:27:12.686406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.879 [2024-11-20 08:27:12.686415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.879 [2024-11-20 08:27:12.686426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.879 [2024-11-20 08:27:12.686433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.879 [2024-11-20 08:27:12.698390] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:58.879 [2024-11-20 08:27:12.698785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:58.879 [2024-11-20 08:27:12.698802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:58.879 [2024-11-20 08:27:12.698811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:58.879 [2024-11-20 08:27:12.698969] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:58.879 [2024-11-20 08:27:12.699128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:58.879 [2024-11-20 08:27:12.699137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:58.879 [2024-11-20 08:27:12.699143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:58.879 [2024-11-20 08:27:12.699150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:58.879 [2024-11-20 08:27:12.711448] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.879 [2024-11-20 08:27:12.711866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.879 [2024-11-20 08:27:12.711907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.879 [2024-11-20 08:27:12.711933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.879 [2024-11-20 08:27:12.712529] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.879 [2024-11-20 08:27:12.713116] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.879 [2024-11-20 08:27:12.713142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.879 [2024-11-20 08:27:12.713164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.879 [2024-11-20 08:27:12.713184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.879 [2024-11-20 08:27:12.724170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.879 [2024-11-20 08:27:12.724585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.879 [2024-11-20 08:27:12.724603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.879 [2024-11-20 08:27:12.724611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.879 [2024-11-20 08:27:12.724770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.879 [2024-11-20 08:27:12.724928] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.879 [2024-11-20 08:27:12.724938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.879 [2024-11-20 08:27:12.724945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.879 [2024-11-20 08:27:12.724952] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.879 [2024-11-20 08:27:12.737087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.879 [2024-11-20 08:27:12.737501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.879 [2024-11-20 08:27:12.737518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.879 [2024-11-20 08:27:12.737526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.879 [2024-11-20 08:27:12.737683] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.879 [2024-11-20 08:27:12.737842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.879 [2024-11-20 08:27:12.737851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.880 [2024-11-20 08:27:12.737857] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.880 [2024-11-20 08:27:12.737864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.880 [2024-11-20 08:27:12.749836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.880 [2024-11-20 08:27:12.750251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.880 [2024-11-20 08:27:12.750301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.880 [2024-11-20 08:27:12.750325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.880 [2024-11-20 08:27:12.750906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.880 [2024-11-20 08:27:12.751417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.880 [2024-11-20 08:27:12.751428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.880 [2024-11-20 08:27:12.751435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.880 [2024-11-20 08:27:12.751441] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.880 [2024-11-20 08:27:12.762584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.880 [2024-11-20 08:27:12.762993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.880 [2024-11-20 08:27:12.763010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.880 [2024-11-20 08:27:12.763018] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.880 [2024-11-20 08:27:12.763176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.880 [2024-11-20 08:27:12.763366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.880 [2024-11-20 08:27:12.763376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.880 [2024-11-20 08:27:12.763383] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.880 [2024-11-20 08:27:12.763390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.880 [2024-11-20 08:27:12.775333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.880 [2024-11-20 08:27:12.775745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.880 [2024-11-20 08:27:12.775763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.880 [2024-11-20 08:27:12.775776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.880 [2024-11-20 08:27:12.775936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.880 [2024-11-20 08:27:12.776095] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.880 [2024-11-20 08:27:12.776105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.880 [2024-11-20 08:27:12.776111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.880 [2024-11-20 08:27:12.776117] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.880 [2024-11-20 08:27:12.788217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.880 [2024-11-20 08:27:12.788643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.880 [2024-11-20 08:27:12.788660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.880 [2024-11-20 08:27:12.788668] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.880 [2024-11-20 08:27:12.788836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.880 [2024-11-20 08:27:12.789004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.880 [2024-11-20 08:27:12.789014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.880 [2024-11-20 08:27:12.789021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.880 [2024-11-20 08:27:12.789028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.880 [2024-11-20 08:27:12.801220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.880 [2024-11-20 08:27:12.801650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.880 [2024-11-20 08:27:12.801668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.880 [2024-11-20 08:27:12.801676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.880 [2024-11-20 08:27:12.801848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.880 [2024-11-20 08:27:12.802020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.880 [2024-11-20 08:27:12.802030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.880 [2024-11-20 08:27:12.802036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.880 [2024-11-20 08:27:12.802043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.880 [2024-11-20 08:27:12.814179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.880 [2024-11-20 08:27:12.814603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.880 [2024-11-20 08:27:12.814622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.880 [2024-11-20 08:27:12.814629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.880 [2024-11-20 08:27:12.814798] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.880 [2024-11-20 08:27:12.814968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.880 [2024-11-20 08:27:12.814978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.880 [2024-11-20 08:27:12.814985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.880 [2024-11-20 08:27:12.814992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.880 [2024-11-20 08:27:12.826921] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.880 [2024-11-20 08:27:12.827276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.880 [2024-11-20 08:27:12.827324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.880 [2024-11-20 08:27:12.827349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.880 [2024-11-20 08:27:12.827927] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.880 [2024-11-20 08:27:12.828481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.880 [2024-11-20 08:27:12.828491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.880 [2024-11-20 08:27:12.828498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.880 [2024-11-20 08:27:12.828504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.880 [2024-11-20 08:27:12.839837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.880 [2024-11-20 08:27:12.840259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.880 [2024-11-20 08:27:12.840276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.880 [2024-11-20 08:27:12.840284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.880 [2024-11-20 08:27:12.840443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.880 [2024-11-20 08:27:12.840602] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.880 [2024-11-20 08:27:12.840612] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.880 [2024-11-20 08:27:12.840618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.880 [2024-11-20 08:27:12.840624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.880 [2024-11-20 08:27:12.852598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.880 [2024-11-20 08:27:12.853014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.880 [2024-11-20 08:27:12.853058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.880 [2024-11-20 08:27:12.853083] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.880 [2024-11-20 08:27:12.853577] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.880 [2024-11-20 08:27:12.853746] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.880 [2024-11-20 08:27:12.853755] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.880 [2024-11-20 08:27:12.853765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.880 [2024-11-20 08:27:12.853771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.880 [2024-11-20 08:27:12.865342] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.880 [2024-11-20 08:27:12.865756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.880 [2024-11-20 08:27:12.865803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.880 [2024-11-20 08:27:12.865828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.880 [2024-11-20 08:27:12.866396] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.880 [2024-11-20 08:27:12.866567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.881 [2024-11-20 08:27:12.866577] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.881 [2024-11-20 08:27:12.866584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.881 [2024-11-20 08:27:12.866590] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.881 [2024-11-20 08:27:12.878167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.881 [2024-11-20 08:27:12.878529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.881 [2024-11-20 08:27:12.878547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.881 [2024-11-20 08:27:12.878554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.881 [2024-11-20 08:27:12.878712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.881 [2024-11-20 08:27:12.878871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.881 [2024-11-20 08:27:12.878881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.881 [2024-11-20 08:27:12.878887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.881 [2024-11-20 08:27:12.878893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:58.881 [2024-11-20 08:27:12.890912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:58.881 [2024-11-20 08:27:12.891242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:58.881 [2024-11-20 08:27:12.891260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:58.881 [2024-11-20 08:27:12.891268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:58.881 [2024-11-20 08:27:12.891426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:58.881 [2024-11-20 08:27:12.891585] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:58.881 [2024-11-20 08:27:12.891595] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:58.881 [2024-11-20 08:27:12.891601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:58.881 [2024-11-20 08:27:12.891607] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.140 [2024-11-20 08:27:12.903902] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.140 [2024-11-20 08:27:12.904334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.140 [2024-11-20 08:27:12.904378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.140 [2024-11-20 08:27:12.904403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.140 [2024-11-20 08:27:12.904981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.140 [2024-11-20 08:27:12.905595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.140 [2024-11-20 08:27:12.905624] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.140 [2024-11-20 08:27:12.905649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.140 [2024-11-20 08:27:12.905656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.140 [2024-11-20 08:27:12.916764] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.140 [2024-11-20 08:27:12.917130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.140 [2024-11-20 08:27:12.917148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.140 [2024-11-20 08:27:12.917156] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.140 [2024-11-20 08:27:12.917329] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.140 [2024-11-20 08:27:12.917506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.140 [2024-11-20 08:27:12.917515] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.140 [2024-11-20 08:27:12.917522] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.140 [2024-11-20 08:27:12.917528] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.140 [2024-11-20 08:27:12.929703] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.141 [2024-11-20 08:27:12.930118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.141 [2024-11-20 08:27:12.930135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.141 [2024-11-20 08:27:12.930143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.141 [2024-11-20 08:27:12.930311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.141 [2024-11-20 08:27:12.930470] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.141 [2024-11-20 08:27:12.930480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.141 [2024-11-20 08:27:12.930486] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.141 [2024-11-20 08:27:12.930492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.141 [2024-11-20 08:27:12.942627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.141 [2024-11-20 08:27:12.943040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.141 [2024-11-20 08:27:12.943086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.141 [2024-11-20 08:27:12.943118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.141 [2024-11-20 08:27:12.943542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.141 [2024-11-20 08:27:12.943704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.141 [2024-11-20 08:27:12.943713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.141 [2024-11-20 08:27:12.943719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.141 [2024-11-20 08:27:12.943726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.141 [2024-11-20 08:27:12.955450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.141 [2024-11-20 08:27:12.955822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.141 [2024-11-20 08:27:12.955867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.141 [2024-11-20 08:27:12.955891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.141 [2024-11-20 08:27:12.956480] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.141 [2024-11-20 08:27:12.956669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.141 [2024-11-20 08:27:12.956679] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.141 [2024-11-20 08:27:12.956685] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.141 [2024-11-20 08:27:12.956692] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.141 [2024-11-20 08:27:12.968222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.141 [2024-11-20 08:27:12.968656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.141 [2024-11-20 08:27:12.968700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.141 [2024-11-20 08:27:12.968724] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.141 [2024-11-20 08:27:12.969315] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.141 [2024-11-20 08:27:12.969812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.141 [2024-11-20 08:27:12.969822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.141 [2024-11-20 08:27:12.969828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.141 [2024-11-20 08:27:12.969835] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.141 [2024-11-20 08:27:12.980980] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.141 [2024-11-20 08:27:12.981432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.141 [2024-11-20 08:27:12.981476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.141 [2024-11-20 08:27:12.981501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.141 [2024-11-20 08:27:12.982058] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.141 [2024-11-20 08:27:12.982463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.141 [2024-11-20 08:27:12.982484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.141 [2024-11-20 08:27:12.982498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.141 [2024-11-20 08:27:12.982513] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.141 [2024-11-20 08:27:12.996172] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.141 [2024-11-20 08:27:12.996705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.141 [2024-11-20 08:27:12.996750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.141 [2024-11-20 08:27:12.996773] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.141 [2024-11-20 08:27:12.997366] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.141 [2024-11-20 08:27:12.997931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.141 [2024-11-20 08:27:12.997944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.141 [2024-11-20 08:27:12.997954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.141 [2024-11-20 08:27:12.997964] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.141 [2024-11-20 08:27:13.009082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.141 [2024-11-20 08:27:13.009517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.141 [2024-11-20 08:27:13.009563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.141 [2024-11-20 08:27:13.009587] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.141 [2024-11-20 08:27:13.009992] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.141 [2024-11-20 08:27:13.010162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.141 [2024-11-20 08:27:13.010172] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.141 [2024-11-20 08:27:13.010178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.141 [2024-11-20 08:27:13.010184] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.141 [2024-11-20 08:27:13.021866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.141 [2024-11-20 08:27:13.022291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.141 [2024-11-20 08:27:13.022309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.141 [2024-11-20 08:27:13.022317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.141 [2024-11-20 08:27:13.022475] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.141 [2024-11-20 08:27:13.022635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.141 [2024-11-20 08:27:13.022645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.141 [2024-11-20 08:27:13.022654] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.141 [2024-11-20 08:27:13.022661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.141 [2024-11-20 08:27:13.034632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.141 [2024-11-20 08:27:13.034971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.141 [2024-11-20 08:27:13.034988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.141 [2024-11-20 08:27:13.034995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.141 [2024-11-20 08:27:13.035153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.141 [2024-11-20 08:27:13.035340] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.141 [2024-11-20 08:27:13.035350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.141 [2024-11-20 08:27:13.035357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.141 [2024-11-20 08:27:13.035364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.141 [2024-11-20 08:27:13.047430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.141 [2024-11-20 08:27:13.047839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.141 [2024-11-20 08:27:13.047857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.141 [2024-11-20 08:27:13.047864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.141 [2024-11-20 08:27:13.048032] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.141 [2024-11-20 08:27:13.048200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.141 [2024-11-20 08:27:13.048216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.141 [2024-11-20 08:27:13.048223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.141 [2024-11-20 08:27:13.048230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.142 [2024-11-20 08:27:13.060468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.142 [2024-11-20 08:27:13.060826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.142 [2024-11-20 08:27:13.060844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.142 [2024-11-20 08:27:13.060852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.142 [2024-11-20 08:27:13.061024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.142 [2024-11-20 08:27:13.061197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.142 [2024-11-20 08:27:13.061213] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.142 [2024-11-20 08:27:13.061220] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.142 [2024-11-20 08:27:13.061227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.142 [2024-11-20 08:27:13.073317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.142 [2024-11-20 08:27:13.073700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.142 [2024-11-20 08:27:13.073744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.142 [2024-11-20 08:27:13.073768] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.142 [2024-11-20 08:27:13.074262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.142 [2024-11-20 08:27:13.074432] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.142 [2024-11-20 08:27:13.074441] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.142 [2024-11-20 08:27:13.074448] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.142 [2024-11-20 08:27:13.074455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.142 [2024-11-20 08:27:13.086158] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.142 [2024-11-20 08:27:13.086558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.142 [2024-11-20 08:27:13.086575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.142 [2024-11-20 08:27:13.086584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.142 [2024-11-20 08:27:13.086742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.142 [2024-11-20 08:27:13.086901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.142 [2024-11-20 08:27:13.086911] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.142 [2024-11-20 08:27:13.086917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.142 [2024-11-20 08:27:13.086923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.142 [2024-11-20 08:27:13.099034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.142 [2024-11-20 08:27:13.099398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.142 [2024-11-20 08:27:13.099415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.142 [2024-11-20 08:27:13.099423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.142 [2024-11-20 08:27:13.099582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.142 [2024-11-20 08:27:13.099741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.142 [2024-11-20 08:27:13.099750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.142 [2024-11-20 08:27:13.099756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.142 [2024-11-20 08:27:13.099763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.142 [2024-11-20 08:27:13.111781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.142 [2024-11-20 08:27:13.112200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.142 [2024-11-20 08:27:13.112258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.142 [2024-11-20 08:27:13.112291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.142 [2024-11-20 08:27:13.112869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.142 [2024-11-20 08:27:13.113079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.142 [2024-11-20 08:27:13.113096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.142 [2024-11-20 08:27:13.113103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.142 [2024-11-20 08:27:13.113109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.142 [2024-11-20 08:27:13.124488] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.142 [2024-11-20 08:27:13.124904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.142 [2024-11-20 08:27:13.124921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.142 [2024-11-20 08:27:13.124929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.142 [2024-11-20 08:27:13.125088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.142 [2024-11-20 08:27:13.125253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.142 [2024-11-20 08:27:13.125263] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.142 [2024-11-20 08:27:13.125270] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.142 [2024-11-20 08:27:13.125277] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.142 [2024-11-20 08:27:13.137234] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.142 [2024-11-20 08:27:13.137644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.142 [2024-11-20 08:27:13.137662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.142 [2024-11-20 08:27:13.137669] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.142 [2024-11-20 08:27:13.137827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.142 [2024-11-20 08:27:13.137986] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.142 [2024-11-20 08:27:13.137995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.142 [2024-11-20 08:27:13.138001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.142 [2024-11-20 08:27:13.138008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.142 [2024-11-20 08:27:13.150134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.142 [2024-11-20 08:27:13.150566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.142 [2024-11-20 08:27:13.150605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.142 [2024-11-20 08:27:13.150631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.142 [2024-11-20 08:27:13.151176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.142 [2024-11-20 08:27:13.151354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.142 [2024-11-20 08:27:13.151365] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.142 [2024-11-20 08:27:13.151372] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.142 [2024-11-20 08:27:13.151379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.403 [2024-11-20 08:27:13.163151] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.403 [2024-11-20 08:27:13.163453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.403 [2024-11-20 08:27:13.163471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.403 [2024-11-20 08:27:13.163480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.403 [2024-11-20 08:27:13.163652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.403 [2024-11-20 08:27:13.163825] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.403 [2024-11-20 08:27:13.163834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.403 [2024-11-20 08:27:13.163841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.403 [2024-11-20 08:27:13.163848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.403 [2024-11-20 08:27:13.176152] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.403 [2024-11-20 08:27:13.176504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.403 [2024-11-20 08:27:13.176522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.403 [2024-11-20 08:27:13.176530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.403 [2024-11-20 08:27:13.176697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.403 [2024-11-20 08:27:13.176864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.403 [2024-11-20 08:27:13.176874] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.403 [2024-11-20 08:27:13.176881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.403 [2024-11-20 08:27:13.176888] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.403 [2024-11-20 08:27:13.189205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.403 [2024-11-20 08:27:13.189650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.403 [2024-11-20 08:27:13.189668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.403 [2024-11-20 08:27:13.189676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.403 [2024-11-20 08:27:13.189849] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.403 [2024-11-20 08:27:13.190021] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.403 [2024-11-20 08:27:13.190031] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.403 [2024-11-20 08:27:13.190041] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.403 [2024-11-20 08:27:13.190048] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.403 [2024-11-20 08:27:13.202101] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.403 [2024-11-20 08:27:13.202530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.403 [2024-11-20 08:27:13.202548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.403 [2024-11-20 08:27:13.202555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.403 [2024-11-20 08:27:13.202723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.403 [2024-11-20 08:27:13.202891] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.403 [2024-11-20 08:27:13.202901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.403 [2024-11-20 08:27:13.202907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.403 [2024-11-20 08:27:13.202914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.403 [2024-11-20 08:27:13.215085] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.403 [2024-11-20 08:27:13.215434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.403 [2024-11-20 08:27:13.215452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.403 [2024-11-20 08:27:13.215460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.403 [2024-11-20 08:27:13.215633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.403 [2024-11-20 08:27:13.215803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.403 [2024-11-20 08:27:13.215813] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.403 [2024-11-20 08:27:13.215820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.403 [2024-11-20 08:27:13.215826] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.403 [2024-11-20 08:27:13.227872] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.403 [2024-11-20 08:27:13.228300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.403 [2024-11-20 08:27:13.228349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.403 [2024-11-20 08:27:13.228373] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.403 [2024-11-20 08:27:13.228952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.403 [2024-11-20 08:27:13.229195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.403 [2024-11-20 08:27:13.229211] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.403 [2024-11-20 08:27:13.229217] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.403 [2024-11-20 08:27:13.229224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.403 [2024-11-20 08:27:13.240646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.403 [2024-11-20 08:27:13.241034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.403 [2024-11-20 08:27:13.241051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.404 [2024-11-20 08:27:13.241060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.404 [2024-11-20 08:27:13.241226] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.404 [2024-11-20 08:27:13.241410] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.404 [2024-11-20 08:27:13.241420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.404 [2024-11-20 08:27:13.241426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.404 [2024-11-20 08:27:13.241433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.404 [2024-11-20 08:27:13.253386] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.404 [2024-11-20 08:27:13.253797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.404 [2024-11-20 08:27:13.253839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.404 [2024-11-20 08:27:13.253865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.404 [2024-11-20 08:27:13.254413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.404 [2024-11-20 08:27:13.254583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.404 [2024-11-20 08:27:13.254593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.404 [2024-11-20 08:27:13.254599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.404 [2024-11-20 08:27:13.254606] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.404 [2024-11-20 08:27:13.266146] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.404 [2024-11-20 08:27:13.266539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.404 [2024-11-20 08:27:13.266556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.404 [2024-11-20 08:27:13.266563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.404 [2024-11-20 08:27:13.266723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.404 [2024-11-20 08:27:13.266882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.404 [2024-11-20 08:27:13.266892] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.404 [2024-11-20 08:27:13.266899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.404 [2024-11-20 08:27:13.266905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.404 [2024-11-20 08:27:13.278881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.404 [2024-11-20 08:27:13.279299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.404 [2024-11-20 08:27:13.279315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.404 [2024-11-20 08:27:13.279326] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.404 [2024-11-20 08:27:13.279485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.404 [2024-11-20 08:27:13.279644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.404 [2024-11-20 08:27:13.279652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.404 [2024-11-20 08:27:13.279658] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.404 [2024-11-20 08:27:13.279664] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.404 [2024-11-20 08:27:13.291797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.404 [2024-11-20 08:27:13.292197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.404 [2024-11-20 08:27:13.292326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.404 [2024-11-20 08:27:13.292335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.404 [2024-11-20 08:27:13.292508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.404 [2024-11-20 08:27:13.292667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.404 [2024-11-20 08:27:13.292676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.404 [2024-11-20 08:27:13.292683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.404 [2024-11-20 08:27:13.292689] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.404 [2024-11-20 08:27:13.304516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.404 [2024-11-20 08:27:13.304871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.404 [2024-11-20 08:27:13.304888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.404 [2024-11-20 08:27:13.304896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.404 [2024-11-20 08:27:13.305064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.404 [2024-11-20 08:27:13.305246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.404 [2024-11-20 08:27:13.305272] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.404 [2024-11-20 08:27:13.305280] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.404 [2024-11-20 08:27:13.305287] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.404 [2024-11-20 08:27:13.317498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.404 [2024-11-20 08:27:13.317921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.404 [2024-11-20 08:27:13.317939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.404 [2024-11-20 08:27:13.317948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.404 [2024-11-20 08:27:13.318122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.404 [2024-11-20 08:27:13.318303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.404 [2024-11-20 08:27:13.318314] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.404 [2024-11-20 08:27:13.318320] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.404 [2024-11-20 08:27:13.318327] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.404 [2024-11-20 08:27:13.330285] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.404 [2024-11-20 08:27:13.330692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.404 [2024-11-20 08:27:13.330731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.404 [2024-11-20 08:27:13.330756] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.404 [2024-11-20 08:27:13.331349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.404 [2024-11-20 08:27:13.331529] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.404 [2024-11-20 08:27:13.331539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.404 [2024-11-20 08:27:13.331545] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.404 [2024-11-20 08:27:13.331551] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.404 [2024-11-20 08:27:13.343086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.404 [2024-11-20 08:27:13.343519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.404 [2024-11-20 08:27:13.343537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.404 [2024-11-20 08:27:13.343544] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.404 [2024-11-20 08:27:13.343711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.404 [2024-11-20 08:27:13.343879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.404 [2024-11-20 08:27:13.343888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.405 [2024-11-20 08:27:13.343895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.405 [2024-11-20 08:27:13.343901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.405 [2024-11-20 08:27:13.355878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.405 [2024-11-20 08:27:13.356226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.405 [2024-11-20 08:27:13.356244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.405 [2024-11-20 08:27:13.356252] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.405 [2024-11-20 08:27:13.356411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.405 [2024-11-20 08:27:13.356570] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.405 [2024-11-20 08:27:13.356579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.405 [2024-11-20 08:27:13.356589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.405 [2024-11-20 08:27:13.356597] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.405 [2024-11-20 08:27:13.368672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.405 [2024-11-20 08:27:13.369064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.405 [2024-11-20 08:27:13.369081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.405 [2024-11-20 08:27:13.369090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.405 [2024-11-20 08:27:13.369270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.405 [2024-11-20 08:27:13.369438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.405 [2024-11-20 08:27:13.369447] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.405 [2024-11-20 08:27:13.369454] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.405 [2024-11-20 08:27:13.369461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.405 [2024-11-20 08:27:13.381445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.405 [2024-11-20 08:27:13.381806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.405 [2024-11-20 08:27:13.381851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.405 [2024-11-20 08:27:13.381875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.405 [2024-11-20 08:27:13.382471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.405 [2024-11-20 08:27:13.382943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.405 [2024-11-20 08:27:13.382952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.405 [2024-11-20 08:27:13.382959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.405 [2024-11-20 08:27:13.382965] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.405 [2024-11-20 08:27:13.394181] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.405 [2024-11-20 08:27:13.394603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.405 [2024-11-20 08:27:13.394649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.405 [2024-11-20 08:27:13.394672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.405 [2024-11-20 08:27:13.395265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.405 [2024-11-20 08:27:13.395818] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.405 [2024-11-20 08:27:13.395828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.405 [2024-11-20 08:27:13.395835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.405 [2024-11-20 08:27:13.395841] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.405 [2024-11-20 08:27:13.406970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.405 [2024-11-20 08:27:13.407372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.405 [2024-11-20 08:27:13.407389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.405 [2024-11-20 08:27:13.407397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.405 [2024-11-20 08:27:13.407555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.405 [2024-11-20 08:27:13.407714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.405 [2024-11-20 08:27:13.407724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.405 [2024-11-20 08:27:13.407730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.405 [2024-11-20 08:27:13.407736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.405 [2024-11-20 08:27:13.419815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.405 [2024-11-20 08:27:13.420250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.405 [2024-11-20 08:27:13.420292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.405 [2024-11-20 08:27:13.420318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.405 [2024-11-20 08:27:13.420833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.405 [2024-11-20 08:27:13.420993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.405 [2024-11-20 08:27:13.421002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.405 [2024-11-20 08:27:13.421008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.405 [2024-11-20 08:27:13.421015] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.664 [2024-11-20 08:27:13.432732] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.664 [2024-11-20 08:27:13.433168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.664 [2024-11-20 08:27:13.433185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.664 [2024-11-20 08:27:13.433193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.664 [2024-11-20 08:27:13.433379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.664 [2024-11-20 08:27:13.433549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.664 [2024-11-20 08:27:13.433558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.664 [2024-11-20 08:27:13.433565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.664 [2024-11-20 08:27:13.433571] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.664 [2024-11-20 08:27:13.445507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.664 [2024-11-20 08:27:13.445860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.664 [2024-11-20 08:27:13.445877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.664 [2024-11-20 08:27:13.445887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.664 [2024-11-20 08:27:13.446046] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.664 [2024-11-20 08:27:13.446210] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.664 [2024-11-20 08:27:13.446220] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.664 [2024-11-20 08:27:13.446227] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.664 [2024-11-20 08:27:13.446234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.664 [2024-11-20 08:27:13.458262] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.664 [2024-11-20 08:27:13.458674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.664 [2024-11-20 08:27:13.458690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.664 [2024-11-20 08:27:13.458698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.664 [2024-11-20 08:27:13.458856] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.664 [2024-11-20 08:27:13.459015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.664 [2024-11-20 08:27:13.459024] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.664 [2024-11-20 08:27:13.459031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.664 [2024-11-20 08:27:13.459037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.664 [2024-11-20 08:27:13.471054] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.664 [2024-11-20 08:27:13.471480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.664 [2024-11-20 08:27:13.471525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.664 [2024-11-20 08:27:13.471550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.664 [2024-11-20 08:27:13.472072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.664 [2024-11-20 08:27:13.472254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.664 [2024-11-20 08:27:13.472265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.664 [2024-11-20 08:27:13.472272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.664 [2024-11-20 08:27:13.472278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.664 [2024-11-20 08:27:13.483878] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.664 [2024-11-20 08:27:13.484270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.665 [2024-11-20 08:27:13.484287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.665 [2024-11-20 08:27:13.484295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.665 [2024-11-20 08:27:13.484454] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.665 [2024-11-20 08:27:13.484616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.665 [2024-11-20 08:27:13.484626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.665 [2024-11-20 08:27:13.484632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.665 [2024-11-20 08:27:13.484638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.665 [2024-11-20 08:27:13.496598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.665 [2024-11-20 08:27:13.497012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.665 [2024-11-20 08:27:13.497029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.665 [2024-11-20 08:27:13.497036] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.665 [2024-11-20 08:27:13.497195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.665 [2024-11-20 08:27:13.497381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.665 [2024-11-20 08:27:13.497391] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.665 [2024-11-20 08:27:13.497398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.665 [2024-11-20 08:27:13.497404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.665 [2024-11-20 08:27:13.509552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.665 [2024-11-20 08:27:13.509982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.665 [2024-11-20 08:27:13.509999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.665 [2024-11-20 08:27:13.510007] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.665 [2024-11-20 08:27:13.510179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.665 [2024-11-20 08:27:13.510356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.665 [2024-11-20 08:27:13.510367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.665 [2024-11-20 08:27:13.510374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.665 [2024-11-20 08:27:13.510382] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.665 [2024-11-20 08:27:13.522513] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.665 [2024-11-20 08:27:13.522865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.665 [2024-11-20 08:27:13.522883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.665 [2024-11-20 08:27:13.522891] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.665 [2024-11-20 08:27:13.523062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.665 [2024-11-20 08:27:13.523240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.665 [2024-11-20 08:27:13.523250] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.665 [2024-11-20 08:27:13.523258] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.665 [2024-11-20 08:27:13.523267] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.665 [2024-11-20 08:27:13.535411] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.665 [2024-11-20 08:27:13.535761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.665 [2024-11-20 08:27:13.535778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.665 [2024-11-20 08:27:13.535787] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.665 [2024-11-20 08:27:13.535954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.665 [2024-11-20 08:27:13.536122] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.665 [2024-11-20 08:27:13.536131] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.665 [2024-11-20 08:27:13.536138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.665 [2024-11-20 08:27:13.536145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.665 [2024-11-20 08:27:13.548326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.665 [2024-11-20 08:27:13.548732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.665 [2024-11-20 08:27:13.548751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.665 [2024-11-20 08:27:13.548759] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.665 [2024-11-20 08:27:13.548928] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.665 [2024-11-20 08:27:13.549096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.665 [2024-11-20 08:27:13.549106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.665 [2024-11-20 08:27:13.549112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.665 [2024-11-20 08:27:13.549119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.665 [2024-11-20 08:27:13.561184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.665 [2024-11-20 08:27:13.561617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.665 [2024-11-20 08:27:13.561636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.665 [2024-11-20 08:27:13.561644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.665 [2024-11-20 08:27:13.561818] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.665 [2024-11-20 08:27:13.561991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.665 [2024-11-20 08:27:13.562000] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.665 [2024-11-20 08:27:13.562007] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.665 [2024-11-20 08:27:13.562014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.665 [2024-11-20 08:27:13.574147] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.665 [2024-11-20 08:27:13.574566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.665 [2024-11-20 08:27:13.574584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.665 [2024-11-20 08:27:13.574592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.665 [2024-11-20 08:27:13.574764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.665 [2024-11-20 08:27:13.574936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.665 [2024-11-20 08:27:13.574946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.665 [2024-11-20 08:27:13.574953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.665 [2024-11-20 08:27:13.574960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.665 [2024-11-20 08:27:13.587244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.665 [2024-11-20 08:27:13.587603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.665 [2024-11-20 08:27:13.587622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.665 [2024-11-20 08:27:13.587630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.665 [2024-11-20 08:27:13.587802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.665 [2024-11-20 08:27:13.587974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.665 [2024-11-20 08:27:13.587983] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.665 [2024-11-20 08:27:13.587990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.665 [2024-11-20 08:27:13.587997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.665 [2024-11-20 08:27:13.600082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:29:59.665 [2024-11-20 08:27:13.600480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:29:59.665 [2024-11-20 08:27:13.600497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:29:59.665 [2024-11-20 08:27:13.600505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:29:59.665 [2024-11-20 08:27:13.600663] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:29:59.665 [2024-11-20 08:27:13.600822] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:29:59.665 [2024-11-20 08:27:13.600831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:29:59.665 [2024-11-20 08:27:13.600837] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:29:59.665 [2024-11-20 08:27:13.600843] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:29:59.666 [2024-11-20 08:27:13.612831] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.666 [2024-11-20 08:27:13.613249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.666 [2024-11-20 08:27:13.613267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.666 [2024-11-20 08:27:13.613278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.666 [2024-11-20 08:27:13.613437] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.666 [2024-11-20 08:27:13.613596] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.666 [2024-11-20 08:27:13.613606] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.666 [2024-11-20 08:27:13.613612] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.666 [2024-11-20 08:27:13.613618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.666 [2024-11-20 08:27:13.625748] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.666 [2024-11-20 08:27:13.626076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.666 [2024-11-20 08:27:13.626094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.666 [2024-11-20 08:27:13.626102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.666 [2024-11-20 08:27:13.626265] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.666 [2024-11-20 08:27:13.626425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.666 [2024-11-20 08:27:13.626434] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.666 [2024-11-20 08:27:13.626441] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.666 [2024-11-20 08:27:13.626447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.666 7638.25 IOPS, 29.84 MiB/s [2024-11-20T07:27:13.694Z] [2024-11-20 08:27:13.638602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.666 [2024-11-20 08:27:13.639031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.666 [2024-11-20 08:27:13.639078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.666 [2024-11-20 08:27:13.639102] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.666 [2024-11-20 08:27:13.639520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.666 [2024-11-20 08:27:13.639693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.666 [2024-11-20 08:27:13.639702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.666 [2024-11-20 08:27:13.639709] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.666 [2024-11-20 08:27:13.639715] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.666 [2024-11-20 08:27:13.653472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.666 [2024-11-20 08:27:13.653990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.666 [2024-11-20 08:27:13.654013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.666 [2024-11-20 08:27:13.654024] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.666 [2024-11-20 08:27:13.654284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.666 [2024-11-20 08:27:13.654546] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.666 [2024-11-20 08:27:13.654559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.666 [2024-11-20 08:27:13.654570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.666 [2024-11-20 08:27:13.654581] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.666 [2024-11-20 08:27:13.666478] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.666 [2024-11-20 08:27:13.666876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.666 [2024-11-20 08:27:13.666894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.666 [2024-11-20 08:27:13.666901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.666 [2024-11-20 08:27:13.667068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.666 [2024-11-20 08:27:13.667242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.666 [2024-11-20 08:27:13.667253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.666 [2024-11-20 08:27:13.667259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.666 [2024-11-20 08:27:13.667266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.666 [2024-11-20 08:27:13.679380] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.666 [2024-11-20 08:27:13.679790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.666 [2024-11-20 08:27:13.679830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.666 [2024-11-20 08:27:13.679856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.666 [2024-11-20 08:27:13.680449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.666 [2024-11-20 08:27:13.680713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.666 [2024-11-20 08:27:13.680723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.666 [2024-11-20 08:27:13.680729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.666 [2024-11-20 08:27:13.680735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.925 [2024-11-20 08:27:13.692547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.925 [2024-11-20 08:27:13.692839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.925 [2024-11-20 08:27:13.692856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.925 [2024-11-20 08:27:13.692864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.925 [2024-11-20 08:27:13.693037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.926 [2024-11-20 08:27:13.693217] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.926 [2024-11-20 08:27:13.693228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.926 [2024-11-20 08:27:13.693238] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.926 [2024-11-20 08:27:13.693246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.926 [2024-11-20 08:27:13.705437] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.926 [2024-11-20 08:27:13.705787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.926 [2024-11-20 08:27:13.705831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.926 [2024-11-20 08:27:13.705855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.926 [2024-11-20 08:27:13.706387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.926 [2024-11-20 08:27:13.706548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.926 [2024-11-20 08:27:13.706557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.926 [2024-11-20 08:27:13.706563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.926 [2024-11-20 08:27:13.706570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.926 [2024-11-20 08:27:13.718301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.926 [2024-11-20 08:27:13.718672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.926 [2024-11-20 08:27:13.718690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.926 [2024-11-20 08:27:13.718697] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.926 [2024-11-20 08:27:13.718854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.926 [2024-11-20 08:27:13.719014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.926 [2024-11-20 08:27:13.719023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.926 [2024-11-20 08:27:13.719030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.926 [2024-11-20 08:27:13.719036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.926 [2024-11-20 08:27:13.731057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.926 [2024-11-20 08:27:13.731418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.926 [2024-11-20 08:27:13.731437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.926 [2024-11-20 08:27:13.731444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.926 [2024-11-20 08:27:13.731611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.926 [2024-11-20 08:27:13.731779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.926 [2024-11-20 08:27:13.731790] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.926 [2024-11-20 08:27:13.731796] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.926 [2024-11-20 08:27:13.731802] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.926 [2024-11-20 08:27:13.743972] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.926 [2024-11-20 08:27:13.744251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.926 [2024-11-20 08:27:13.744270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.926 [2024-11-20 08:27:13.744278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.926 [2024-11-20 08:27:13.744446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.926 [2024-11-20 08:27:13.744616] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.926 [2024-11-20 08:27:13.744625] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.926 [2024-11-20 08:27:13.744631] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.926 [2024-11-20 08:27:13.744637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.926 [2024-11-20 08:27:13.756820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.926 [2024-11-20 08:27:13.757149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.926 [2024-11-20 08:27:13.757167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.926 [2024-11-20 08:27:13.757175] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.926 [2024-11-20 08:27:13.757348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.926 [2024-11-20 08:27:13.757519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.926 [2024-11-20 08:27:13.757529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.926 [2024-11-20 08:27:13.757536] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.926 [2024-11-20 08:27:13.757542] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.926 [2024-11-20 08:27:13.769765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.926 [2024-11-20 08:27:13.770075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.926 [2024-11-20 08:27:13.770092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.926 [2024-11-20 08:27:13.770100] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.926 [2024-11-20 08:27:13.770264] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.926 [2024-11-20 08:27:13.770423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.926 [2024-11-20 08:27:13.770432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.926 [2024-11-20 08:27:13.770439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.926 [2024-11-20 08:27:13.770446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.926 [2024-11-20 08:27:13.782574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.926 [2024-11-20 08:27:13.782946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.926 [2024-11-20 08:27:13.782963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.926 [2024-11-20 08:27:13.782974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.926 [2024-11-20 08:27:13.783133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.926 [2024-11-20 08:27:13.783297] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.926 [2024-11-20 08:27:13.783307] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.926 [2024-11-20 08:27:13.783314] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.926 [2024-11-20 08:27:13.783320] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.926 [2024-11-20 08:27:13.795455] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.926 [2024-11-20 08:27:13.795724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.927 [2024-11-20 08:27:13.795741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.927 [2024-11-20 08:27:13.795749] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.927 [2024-11-20 08:27:13.795917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.927 [2024-11-20 08:27:13.796085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.927 [2024-11-20 08:27:13.796095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.927 [2024-11-20 08:27:13.796101] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.927 [2024-11-20 08:27:13.796108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.927 [2024-11-20 08:27:13.808275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.927 [2024-11-20 08:27:13.808615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.927 [2024-11-20 08:27:13.808632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.927 [2024-11-20 08:27:13.808639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.927 [2024-11-20 08:27:13.808797] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.927 [2024-11-20 08:27:13.808957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.927 [2024-11-20 08:27:13.808966] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.927 [2024-11-20 08:27:13.808973] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.927 [2024-11-20 08:27:13.808979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.927 [2024-11-20 08:27:13.821163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.927 [2024-11-20 08:27:13.821553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.927 [2024-11-20 08:27:13.821572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.927 [2024-11-20 08:27:13.821580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.927 [2024-11-20 08:27:13.821746] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.927 [2024-11-20 08:27:13.821918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.927 [2024-11-20 08:27:13.821928] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.927 [2024-11-20 08:27:13.821934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.927 [2024-11-20 08:27:13.821942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.927 [2024-11-20 08:27:13.834128] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.927 [2024-11-20 08:27:13.834519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.927 [2024-11-20 08:27:13.834537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.927 [2024-11-20 08:27:13.834545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.927 [2024-11-20 08:27:13.834716] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.927 [2024-11-20 08:27:13.834889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.927 [2024-11-20 08:27:13.834898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.927 [2024-11-20 08:27:13.834905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.927 [2024-11-20 08:27:13.834911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.927 [2024-11-20 08:27:13.847087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.927 [2024-11-20 08:27:13.847419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.927 [2024-11-20 08:27:13.847464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.927 [2024-11-20 08:27:13.847489] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.927 [2024-11-20 08:27:13.847999] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.927 [2024-11-20 08:27:13.848160] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.927 [2024-11-20 08:27:13.848169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.927 [2024-11-20 08:27:13.848175] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.927 [2024-11-20 08:27:13.848182] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.927 [2024-11-20 08:27:13.860095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.927 [2024-11-20 08:27:13.860423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.927 [2024-11-20 08:27:13.860441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.927 [2024-11-20 08:27:13.860448] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.927 [2024-11-20 08:27:13.860606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.927 [2024-11-20 08:27:13.860766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.927 [2024-11-20 08:27:13.860776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.927 [2024-11-20 08:27:13.860786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.927 [2024-11-20 08:27:13.860792] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.927 [2024-11-20 08:27:13.872912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.927 [2024-11-20 08:27:13.873249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.927 [2024-11-20 08:27:13.873266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.927 [2024-11-20 08:27:13.873274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.927 [2024-11-20 08:27:13.873433] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.927 [2024-11-20 08:27:13.873592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.927 [2024-11-20 08:27:13.873602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.927 [2024-11-20 08:27:13.873608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.927 [2024-11-20 08:27:13.873615] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.927 [2024-11-20 08:27:13.885854] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.927 [2024-11-20 08:27:13.886186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.927 [2024-11-20 08:27:13.886210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.927 [2024-11-20 08:27:13.886218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.927 [2024-11-20 08:27:13.886395] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.927 [2024-11-20 08:27:13.886555] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.927 [2024-11-20 08:27:13.886564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.927 [2024-11-20 08:27:13.886570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.927 [2024-11-20 08:27:13.886576] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.927 [2024-11-20 08:27:13.898780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.927 [2024-11-20 08:27:13.899046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.927 [2024-11-20 08:27:13.899063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.927 [2024-11-20 08:27:13.899070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.927 [2024-11-20 08:27:13.899250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.927 [2024-11-20 08:27:13.899418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.927 [2024-11-20 08:27:13.899428] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.928 [2024-11-20 08:27:13.899435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.928 [2024-11-20 08:27:13.899442] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.928 [2024-11-20 08:27:13.911626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.928 [2024-11-20 08:27:13.912025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.928 [2024-11-20 08:27:13.912042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.928 [2024-11-20 08:27:13.912050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.928 [2024-11-20 08:27:13.912225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.928 [2024-11-20 08:27:13.912395] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.928 [2024-11-20 08:27:13.912404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.928 [2024-11-20 08:27:13.912411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.928 [2024-11-20 08:27:13.912417] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.928 [2024-11-20 08:27:13.924544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.928 [2024-11-20 08:27:13.925023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.928 [2024-11-20 08:27:13.925069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.928 [2024-11-20 08:27:13.925094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.928 [2024-11-20 08:27:13.925687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.928 [2024-11-20 08:27:13.926141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.928 [2024-11-20 08:27:13.926150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.928 [2024-11-20 08:27:13.926156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.928 [2024-11-20 08:27:13.926162] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:29:59.928 [2024-11-20 08:27:13.937492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:29:59.928 [2024-11-20 08:27:13.937825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:59.928 [2024-11-20 08:27:13.937843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:29:59.928 [2024-11-20 08:27:13.937850] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:29:59.928 [2024-11-20 08:27:13.938009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:29:59.928 [2024-11-20 08:27:13.938168] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:29:59.928 [2024-11-20 08:27:13.938178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:29:59.928 [2024-11-20 08:27:13.938184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:29:59.928 [2024-11-20 08:27:13.938190] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.188 [2024-11-20 08:27:13.950463] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.188 [2024-11-20 08:27:13.950824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.188 [2024-11-20 08:27:13.950842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.188 [2024-11-20 08:27:13.950854] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.188 [2024-11-20 08:27:13.951026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.188 [2024-11-20 08:27:13.951199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.188 [2024-11-20 08:27:13.951215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.188 [2024-11-20 08:27:13.951222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.188 [2024-11-20 08:27:13.951229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.188 [2024-11-20 08:27:13.963518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.188 [2024-11-20 08:27:13.963842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.188 [2024-11-20 08:27:13.963860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.188 [2024-11-20 08:27:13.963868] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.188 [2024-11-20 08:27:13.964042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.188 [2024-11-20 08:27:13.964223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.188 [2024-11-20 08:27:13.964234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.188 [2024-11-20 08:27:13.964241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.188 [2024-11-20 08:27:13.964248] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.188 [2024-11-20 08:27:13.976557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.188 [2024-11-20 08:27:13.976899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.188 [2024-11-20 08:27:13.976916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.188 [2024-11-20 08:27:13.976925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.188 [2024-11-20 08:27:13.977097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.188 [2024-11-20 08:27:13.977277] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.188 [2024-11-20 08:27:13.977287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.188 [2024-11-20 08:27:13.977294] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.188 [2024-11-20 08:27:13.977301] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.188 [2024-11-20 08:27:13.989632] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.188 [2024-11-20 08:27:13.990060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.188 [2024-11-20 08:27:13.990078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.188 [2024-11-20 08:27:13.990086] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.188 [2024-11-20 08:27:13.990263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.188 [2024-11-20 08:27:13.990440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.188 [2024-11-20 08:27:13.990450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.188 [2024-11-20 08:27:13.990457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.188 [2024-11-20 08:27:13.990464] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.188 [2024-11-20 08:27:14.002708] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.188 [2024-11-20 08:27:14.003146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.188 [2024-11-20 08:27:14.003164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.188 [2024-11-20 08:27:14.003173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.188 [2024-11-20 08:27:14.003363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.188 [2024-11-20 08:27:14.003548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.188 [2024-11-20 08:27:14.003558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.188 [2024-11-20 08:27:14.003565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.188 [2024-11-20 08:27:14.003573] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.188 [2024-11-20 08:27:14.015815] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.188 [2024-11-20 08:27:14.016197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.188 [2024-11-20 08:27:14.016220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.188 [2024-11-20 08:27:14.016229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.188 [2024-11-20 08:27:14.016413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.189 [2024-11-20 08:27:14.016600] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.189 [2024-11-20 08:27:14.016610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.189 [2024-11-20 08:27:14.016617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.189 [2024-11-20 08:27:14.016623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.189 [2024-11-20 08:27:14.028916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.189 [2024-11-20 08:27:14.029324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.189 [2024-11-20 08:27:14.029342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.189 [2024-11-20 08:27:14.029350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.189 [2024-11-20 08:27:14.029522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.189 [2024-11-20 08:27:14.029695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.189 [2024-11-20 08:27:14.029705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.189 [2024-11-20 08:27:14.029718] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.189 [2024-11-20 08:27:14.029726] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.189 [2024-11-20 08:27:14.042048] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.189 [2024-11-20 08:27:14.042407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.189 [2024-11-20 08:27:14.042425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.189 [2024-11-20 08:27:14.042433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.189 [2024-11-20 08:27:14.042605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.189 [2024-11-20 08:27:14.042778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.189 [2024-11-20 08:27:14.042787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.189 [2024-11-20 08:27:14.042794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.189 [2024-11-20 08:27:14.042801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.189 [2024-11-20 08:27:14.055334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.189 [2024-11-20 08:27:14.055768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.189 [2024-11-20 08:27:14.055786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.189 [2024-11-20 08:27:14.055794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.189 [2024-11-20 08:27:14.055978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.189 [2024-11-20 08:27:14.056161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.189 [2024-11-20 08:27:14.056171] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.189 [2024-11-20 08:27:14.056178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.189 [2024-11-20 08:27:14.056185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.189 [2024-11-20 08:27:14.068326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.189 [2024-11-20 08:27:14.068747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.189 [2024-11-20 08:27:14.068785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.189 [2024-11-20 08:27:14.068811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.189 [2024-11-20 08:27:14.069416] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.189 [2024-11-20 08:27:14.069590] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.189 [2024-11-20 08:27:14.069600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.189 [2024-11-20 08:27:14.069607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.189 [2024-11-20 08:27:14.069614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.189 [2024-11-20 08:27:14.081334] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.189 [2024-11-20 08:27:14.081772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.189 [2024-11-20 08:27:14.081790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.189 [2024-11-20 08:27:14.081798] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.189 [2024-11-20 08:27:14.081970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.189 [2024-11-20 08:27:14.082143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.189 [2024-11-20 08:27:14.082153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.189 [2024-11-20 08:27:14.082161] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.189 [2024-11-20 08:27:14.082169] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.189 [2024-11-20 08:27:14.094349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.189 [2024-11-20 08:27:14.094769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.189 [2024-11-20 08:27:14.094786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.189 [2024-11-20 08:27:14.094794] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.189 [2024-11-20 08:27:14.094967] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.189 [2024-11-20 08:27:14.095140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.189 [2024-11-20 08:27:14.095150] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.189 [2024-11-20 08:27:14.095156] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.189 [2024-11-20 08:27:14.095163] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.189 [2024-11-20 08:27:14.107165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.189 [2024-11-20 08:27:14.107586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.189 [2024-11-20 08:27:14.107604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.189 [2024-11-20 08:27:14.107612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.189 [2024-11-20 08:27:14.107780] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.189 [2024-11-20 08:27:14.107948] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.189 [2024-11-20 08:27:14.107958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.189 [2024-11-20 08:27:14.107964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.189 [2024-11-20 08:27:14.107971] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.189 [2024-11-20 08:27:14.120053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.189 [2024-11-20 08:27:14.120471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.189 [2024-11-20 08:27:14.120489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.189 [2024-11-20 08:27:14.120500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.189 [2024-11-20 08:27:14.120660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.189 [2024-11-20 08:27:14.120820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.189 [2024-11-20 08:27:14.120829] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.189 [2024-11-20 08:27:14.120835] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.189 [2024-11-20 08:27:14.120842] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.189 [2024-11-20 08:27:14.132906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.189 [2024-11-20 08:27:14.133261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.189 [2024-11-20 08:27:14.133308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.189 [2024-11-20 08:27:14.133333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.189 [2024-11-20 08:27:14.133808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.189 [2024-11-20 08:27:14.133968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.189 [2024-11-20 08:27:14.133978] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.189 [2024-11-20 08:27:14.133984] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.189 [2024-11-20 08:27:14.133991] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.189 [2024-11-20 08:27:14.145758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.189 [2024-11-20 08:27:14.146164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.189 [2024-11-20 08:27:14.146221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.190 [2024-11-20 08:27:14.146247] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.190 [2024-11-20 08:27:14.146689] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.190 [2024-11-20 08:27:14.146850] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.190 [2024-11-20 08:27:14.146859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.190 [2024-11-20 08:27:14.146865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.190 [2024-11-20 08:27:14.146872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.190 [2024-11-20 08:27:14.158493] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.190 [2024-11-20 08:27:14.158905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.190 [2024-11-20 08:27:14.158922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.190 [2024-11-20 08:27:14.158929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.190 [2024-11-20 08:27:14.159088] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.190 [2024-11-20 08:27:14.159272] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.190 [2024-11-20 08:27:14.159282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.190 [2024-11-20 08:27:14.159288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.190 [2024-11-20 08:27:14.159295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.190 [2024-11-20 08:27:14.171338] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.190 [2024-11-20 08:27:14.171746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.190 [2024-11-20 08:27:14.171763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.190 [2024-11-20 08:27:14.171770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.190 [2024-11-20 08:27:14.171929] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.190 [2024-11-20 08:27:14.172088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.190 [2024-11-20 08:27:14.172097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.190 [2024-11-20 08:27:14.172104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.190 [2024-11-20 08:27:14.172112] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.190 [2024-11-20 08:27:14.184178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.190 [2024-11-20 08:27:14.184505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.190 [2024-11-20 08:27:14.184527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.190 [2024-11-20 08:27:14.184535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.190 [2024-11-20 08:27:14.184694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.190 [2024-11-20 08:27:14.184853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.190 [2024-11-20 08:27:14.184862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.190 [2024-11-20 08:27:14.184868] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.190 [2024-11-20 08:27:14.184875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.190 [2024-11-20 08:27:14.196989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.190 [2024-11-20 08:27:14.197343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.190 [2024-11-20 08:27:14.197361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.190 [2024-11-20 08:27:14.197369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.190 [2024-11-20 08:27:14.197536] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.190 [2024-11-20 08:27:14.197704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.190 [2024-11-20 08:27:14.197713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.190 [2024-11-20 08:27:14.197723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.190 [2024-11-20 08:27:14.197730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.190 [2024-11-20 08:27:14.209960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.190 [2024-11-20 08:27:14.210325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.190 [2024-11-20 08:27:14.210344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.190 [2024-11-20 08:27:14.210352] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.190 [2024-11-20 08:27:14.210524] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.451 [2024-11-20 08:27:14.210696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.451 [2024-11-20 08:27:14.210707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.451 [2024-11-20 08:27:14.210716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.451 [2024-11-20 08:27:14.210723] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.451 [2024-11-20 08:27:14.222779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.451 [2024-11-20 08:27:14.223200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.451 [2024-11-20 08:27:14.223259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.451 [2024-11-20 08:27:14.223283] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.451 [2024-11-20 08:27:14.223705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.451 [2024-11-20 08:27:14.223865] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.451 [2024-11-20 08:27:14.223875] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.451 [2024-11-20 08:27:14.223881] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.451 [2024-11-20 08:27:14.223887] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.451 [2024-11-20 08:27:14.235633] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.451 [2024-11-20 08:27:14.235984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.451 [2024-11-20 08:27:14.236001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.451 [2024-11-20 08:27:14.236010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.451 [2024-11-20 08:27:14.236168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.451 [2024-11-20 08:27:14.236354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.451 [2024-11-20 08:27:14.236364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.451 [2024-11-20 08:27:14.236371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.451 [2024-11-20 08:27:14.236378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.451 [2024-11-20 08:27:14.248472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.451 [2024-11-20 08:27:14.248915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.451 [2024-11-20 08:27:14.248960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.451 [2024-11-20 08:27:14.248985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.451 [2024-11-20 08:27:14.249579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.451 [2024-11-20 08:27:14.250073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.451 [2024-11-20 08:27:14.250083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.451 [2024-11-20 08:27:14.250089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.451 [2024-11-20 08:27:14.250096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.451 [2024-11-20 08:27:14.261317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.451 [2024-11-20 08:27:14.261748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.451 [2024-11-20 08:27:14.261793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.451 [2024-11-20 08:27:14.261817] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.451 [2024-11-20 08:27:14.262409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.451 [2024-11-20 08:27:14.262909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.451 [2024-11-20 08:27:14.262918] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.451 [2024-11-20 08:27:14.262925] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.451 [2024-11-20 08:27:14.262932] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.451 [2024-11-20 08:27:14.274090] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.451 [2024-11-20 08:27:14.274504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.451 [2024-11-20 08:27:14.274522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.451 [2024-11-20 08:27:14.274529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.451 [2024-11-20 08:27:14.274688] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.451 [2024-11-20 08:27:14.274848] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.451 [2024-11-20 08:27:14.274857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.451 [2024-11-20 08:27:14.274863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.451 [2024-11-20 08:27:14.274870] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.451 [2024-11-20 08:27:14.286926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.451 [2024-11-20 08:27:14.287339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.451 [2024-11-20 08:27:14.287355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.451 [2024-11-20 08:27:14.287365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.451 [2024-11-20 08:27:14.287525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.451 [2024-11-20 08:27:14.287683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.451 [2024-11-20 08:27:14.287692] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.451 [2024-11-20 08:27:14.287698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.451 [2024-11-20 08:27:14.287704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.451 [2024-11-20 08:27:14.299761] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.451 [2024-11-20 08:27:14.300177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.451 [2024-11-20 08:27:14.300246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.451 [2024-11-20 08:27:14.300271] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.451 [2024-11-20 08:27:14.300836] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.451 [2024-11-20 08:27:14.300996] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.451 [2024-11-20 08:27:14.301004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.451 [2024-11-20 08:27:14.301010] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.451 [2024-11-20 08:27:14.301016] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.452 [2024-11-20 08:27:14.312598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.452 [2024-11-20 08:27:14.312986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.452 [2024-11-20 08:27:14.313003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.452 [2024-11-20 08:27:14.313011] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.452 [2024-11-20 08:27:14.313170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.452 [2024-11-20 08:27:14.313357] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.452 [2024-11-20 08:27:14.313367] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.452 [2024-11-20 08:27:14.313374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.452 [2024-11-20 08:27:14.313381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.452 [2024-11-20 08:27:14.325412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.452 [2024-11-20 08:27:14.325775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.452 [2024-11-20 08:27:14.325820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.452 [2024-11-20 08:27:14.325844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.452 [2024-11-20 08:27:14.326439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.452 [2024-11-20 08:27:14.327030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.452 [2024-11-20 08:27:14.327057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.452 [2024-11-20 08:27:14.327065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.452 [2024-11-20 08:27:14.327072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.452 [2024-11-20 08:27:14.338319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.452 [2024-11-20 08:27:14.338640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.452 [2024-11-20 08:27:14.338658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.452 [2024-11-20 08:27:14.338666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.452 [2024-11-20 08:27:14.338839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.452 [2024-11-20 08:27:14.339012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.452 [2024-11-20 08:27:14.339022] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.452 [2024-11-20 08:27:14.339029] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.452 [2024-11-20 08:27:14.339036] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.452 [2024-11-20 08:27:14.351351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.452 [2024-11-20 08:27:14.351778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.452 [2024-11-20 08:27:14.351795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.452 [2024-11-20 08:27:14.351804] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.452 [2024-11-20 08:27:14.351977] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.452 [2024-11-20 08:27:14.352149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.452 [2024-11-20 08:27:14.352159] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.452 [2024-11-20 08:27:14.352166] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.452 [2024-11-20 08:27:14.352173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.452 [2024-11-20 08:27:14.364249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.452 [2024-11-20 08:27:14.364681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.452 [2024-11-20 08:27:14.364726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.452 [2024-11-20 08:27:14.364750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.452 [2024-11-20 08:27:14.365276] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.452 [2024-11-20 08:27:14.365446] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.452 [2024-11-20 08:27:14.365454] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.452 [2024-11-20 08:27:14.365464] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.452 [2024-11-20 08:27:14.365470] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.452 [2024-11-20 08:27:14.376979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.452 [2024-11-20 08:27:14.377393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.452 [2024-11-20 08:27:14.377411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.452 [2024-11-20 08:27:14.377418] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.452 [2024-11-20 08:27:14.377576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.452 [2024-11-20 08:27:14.377735] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.452 [2024-11-20 08:27:14.377744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.452 [2024-11-20 08:27:14.377751] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.452 [2024-11-20 08:27:14.377757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.452 [2024-11-20 08:27:14.389817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.452 [2024-11-20 08:27:14.390229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.452 [2024-11-20 08:27:14.390246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.452 [2024-11-20 08:27:14.390254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.452 [2024-11-20 08:27:14.390432] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.452 [2024-11-20 08:27:14.390601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.452 [2024-11-20 08:27:14.390611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.452 [2024-11-20 08:27:14.390617] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.452 [2024-11-20 08:27:14.390623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.452 [2024-11-20 08:27:14.402592] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.452 [2024-11-20 08:27:14.403040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.452 [2024-11-20 08:27:14.403085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.452 [2024-11-20 08:27:14.403109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.452 [2024-11-20 08:27:14.403705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.452 [2024-11-20 08:27:14.404266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.452 [2024-11-20 08:27:14.404276] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.452 [2024-11-20 08:27:14.404283] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.452 [2024-11-20 08:27:14.404289] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.452 [2024-11-20 08:27:14.415607] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.452 [2024-11-20 08:27:14.416043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.452 [2024-11-20 08:27:14.416062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.452 [2024-11-20 08:27:14.416070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.452 [2024-11-20 08:27:14.416247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.452 [2024-11-20 08:27:14.416421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.452 [2024-11-20 08:27:14.416432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.452 [2024-11-20 08:27:14.416438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.452 [2024-11-20 08:27:14.416445] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.452 [2024-11-20 08:27:14.428577] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.452 [2024-11-20 08:27:14.429010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.452 [2024-11-20 08:27:14.429055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.452 [2024-11-20 08:27:14.429079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.452 [2024-11-20 08:27:14.429569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.452 [2024-11-20 08:27:14.429743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.452 [2024-11-20 08:27:14.429751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.452 [2024-11-20 08:27:14.429758] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.452 [2024-11-20 08:27:14.429764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.453 [2024-11-20 08:27:14.441489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.453 [2024-11-20 08:27:14.441911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.453 [2024-11-20 08:27:14.441928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.453 [2024-11-20 08:27:14.441937] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.453 [2024-11-20 08:27:14.442104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.453 [2024-11-20 08:27:14.442278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.453 [2024-11-20 08:27:14.442288] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.453 [2024-11-20 08:27:14.442295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.453 [2024-11-20 08:27:14.442302] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.453 [2024-11-20 08:27:14.454212] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.453 [2024-11-20 08:27:14.454620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.453 [2024-11-20 08:27:14.454637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.453 [2024-11-20 08:27:14.454648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.453 [2024-11-20 08:27:14.454807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.453 [2024-11-20 08:27:14.454966] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.453 [2024-11-20 08:27:14.454976] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.453 [2024-11-20 08:27:14.454982] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.453 [2024-11-20 08:27:14.454989] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.453 [2024-11-20 08:27:14.466997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.453 [2024-11-20 08:27:14.467419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.453 [2024-11-20 08:27:14.467467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.453 [2024-11-20 08:27:14.467491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.453 [2024-11-20 08:27:14.468072] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.453 [2024-11-20 08:27:14.468489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.453 [2024-11-20 08:27:14.468499] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.453 [2024-11-20 08:27:14.468506] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.453 [2024-11-20 08:27:14.468512] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.713 [2024-11-20 08:27:14.479884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.713 [2024-11-20 08:27:14.480289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.713 [2024-11-20 08:27:14.480307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.713 [2024-11-20 08:27:14.480315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.713 [2024-11-20 08:27:14.480488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.713 [2024-11-20 08:27:14.480647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.713 [2024-11-20 08:27:14.480657] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.713 [2024-11-20 08:27:14.480664] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.713 [2024-11-20 08:27:14.480670] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.713 [2024-11-20 08:27:14.492824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.713 [2024-11-20 08:27:14.493213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.713 [2024-11-20 08:27:14.493254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.713 [2024-11-20 08:27:14.493280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.713 [2024-11-20 08:27:14.493858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.713 [2024-11-20 08:27:14.494463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.713 [2024-11-20 08:27:14.494492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.713 [2024-11-20 08:27:14.494512] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.713 [2024-11-20 08:27:14.494532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.713 [2024-11-20 08:27:14.505615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.713 [2024-11-20 08:27:14.506018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.713 [2024-11-20 08:27:14.506063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.713 [2024-11-20 08:27:14.506087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.713 [2024-11-20 08:27:14.506653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.713 [2024-11-20 08:27:14.506823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.713 [2024-11-20 08:27:14.506832] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.713 [2024-11-20 08:27:14.506838] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.713 [2024-11-20 08:27:14.506845] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.713 [2024-11-20 08:27:14.518415] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.713 [2024-11-20 08:27:14.518813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.713 [2024-11-20 08:27:14.518858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.713 [2024-11-20 08:27:14.518882] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.713 [2024-11-20 08:27:14.519327] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.713 [2024-11-20 08:27:14.519497] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.713 [2024-11-20 08:27:14.519507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.713 [2024-11-20 08:27:14.519529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.713 [2024-11-20 08:27:14.519545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.713 [2024-11-20 08:27:14.533332] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.713 [2024-11-20 08:27:14.533846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.713 [2024-11-20 08:27:14.533869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.713 [2024-11-20 08:27:14.533880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.713 [2024-11-20 08:27:14.534134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.713 [2024-11-20 08:27:14.534397] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.713 [2024-11-20 08:27:14.534411] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.713 [2024-11-20 08:27:14.534426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.713 [2024-11-20 08:27:14.534436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.713 [2024-11-20 08:27:14.546461] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.713 [2024-11-20 08:27:14.546893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.713 [2024-11-20 08:27:14.546912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.713 [2024-11-20 08:27:14.546920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.713 [2024-11-20 08:27:14.547093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.713 [2024-11-20 08:27:14.547273] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.713 [2024-11-20 08:27:14.547284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.713 [2024-11-20 08:27:14.547291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.713 [2024-11-20 08:27:14.547298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.713 [2024-11-20 08:27:14.559242] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.714 [2024-11-20 08:27:14.559652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.714 [2024-11-20 08:27:14.559694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.714 [2024-11-20 08:27:14.559721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.714 [2024-11-20 08:27:14.560314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.714 [2024-11-20 08:27:14.560521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.714 [2024-11-20 08:27:14.560531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.714 [2024-11-20 08:27:14.560537] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.714 [2024-11-20 08:27:14.560544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.714 [2024-11-20 08:27:14.573997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.714 [2024-11-20 08:27:14.574500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.714 [2024-11-20 08:27:14.574524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.714 [2024-11-20 08:27:14.574534] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.714 [2024-11-20 08:27:14.574789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.714 [2024-11-20 08:27:14.575044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.714 [2024-11-20 08:27:14.575057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.714 [2024-11-20 08:27:14.575067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.714 [2024-11-20 08:27:14.575077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.714 [2024-11-20 08:27:14.587095] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.714 [2024-11-20 08:27:14.587456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.714 [2024-11-20 08:27:14.587474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.714 [2024-11-20 08:27:14.587482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.714 [2024-11-20 08:27:14.587655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.714 [2024-11-20 08:27:14.587828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.714 [2024-11-20 08:27:14.587838] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.714 [2024-11-20 08:27:14.587844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.714 [2024-11-20 08:27:14.587851] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.714 [2024-11-20 08:27:14.600130] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.714 [2024-11-20 08:27:14.600560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.714 [2024-11-20 08:27:14.600578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.714 [2024-11-20 08:27:14.600586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.714 [2024-11-20 08:27:14.600760] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.714 [2024-11-20 08:27:14.600933] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.714 [2024-11-20 08:27:14.600943] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.714 [2024-11-20 08:27:14.600949] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.714 [2024-11-20 08:27:14.600956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.714 [2024-11-20 08:27:14.613082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.714 [2024-11-20 08:27:14.613486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.714 [2024-11-20 08:27:14.613504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.714 [2024-11-20 08:27:14.613512] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.714 [2024-11-20 08:27:14.613685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.714 [2024-11-20 08:27:14.613859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.714 [2024-11-20 08:27:14.613868] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.714 [2024-11-20 08:27:14.613875] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.714 [2024-11-20 08:27:14.613882] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.714 [2024-11-20 08:27:14.626135] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.714 [2024-11-20 08:27:14.626539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.714 [2024-11-20 08:27:14.626558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.714 [2024-11-20 08:27:14.626569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.714 [2024-11-20 08:27:14.626737] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.714 [2024-11-20 08:27:14.626905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.714 [2024-11-20 08:27:14.626915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.714 [2024-11-20 08:27:14.626924] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.714 [2024-11-20 08:27:14.626931] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.714 6110.60 IOPS, 23.87 MiB/s [2024-11-20T07:27:14.742Z]
00:30:00.714 [2024-11-20 08:27:14.639070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.714 [2024-11-20 08:27:14.639494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.714 [2024-11-20 08:27:14.639512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.714 [2024-11-20 08:27:14.639520] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.714 [2024-11-20 08:27:14.639687] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.714 [2024-11-20 08:27:14.639855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.714 [2024-11-20 08:27:14.639865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.714 [2024-11-20 08:27:14.639872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.714 [2024-11-20 08:27:14.639879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.714 [2024-11-20 08:27:14.652034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.714 [2024-11-20 08:27:14.652437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.714 [2024-11-20 08:27:14.652455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.714 [2024-11-20 08:27:14.652463] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.714 [2024-11-20 08:27:14.652631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.714 [2024-11-20 08:27:14.652799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.714 [2024-11-20 08:27:14.652809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.714 [2024-11-20 08:27:14.652815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.714 [2024-11-20 08:27:14.652822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.714 [2024-11-20 08:27:14.664927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.714 [2024-11-20 08:27:14.665327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.714 [2024-11-20 08:27:14.665346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.714 [2024-11-20 08:27:14.665354] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.714 [2024-11-20 08:27:14.665526] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.714 [2024-11-20 08:27:14.665707] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.714 [2024-11-20 08:27:14.665717] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.714 [2024-11-20 08:27:14.665724] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.714 [2024-11-20 08:27:14.665730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.714 [2024-11-20 08:27:14.677907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.714 [2024-11-20 08:27:14.678326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.714 [2024-11-20 08:27:14.678345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.714 [2024-11-20 08:27:14.678353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.714 [2024-11-20 08:27:14.678521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.714 [2024-11-20 08:27:14.678689] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.714 [2024-11-20 08:27:14.678699] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.714 [2024-11-20 08:27:14.678705] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.715 [2024-11-20 08:27:14.678712] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.715 [2024-11-20 08:27:14.690620] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.715 [2024-11-20 08:27:14.690974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.715 [2024-11-20 08:27:14.690990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.715 [2024-11-20 08:27:14.690998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.715 [2024-11-20 08:27:14.691156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.715 [2024-11-20 08:27:14.691342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.715 [2024-11-20 08:27:14.691352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.715 [2024-11-20 08:27:14.691359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.715 [2024-11-20 08:27:14.691366] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.715 [2024-11-20 08:27:14.703346] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.715 [2024-11-20 08:27:14.703759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.715 [2024-11-20 08:27:14.703776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.715 [2024-11-20 08:27:14.703784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.715 [2024-11-20 08:27:14.703942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.715 [2024-11-20 08:27:14.704100] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.715 [2024-11-20 08:27:14.704110] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.715 [2024-11-20 08:27:14.704119] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.715 [2024-11-20 08:27:14.704126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.715 [2024-11-20 08:27:14.716100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.715 [2024-11-20 08:27:14.716529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.715 [2024-11-20 08:27:14.716576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.715 [2024-11-20 08:27:14.716600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.715 [2024-11-20 08:27:14.717132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.715 [2024-11-20 08:27:14.717319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.715 [2024-11-20 08:27:14.717329] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.715 [2024-11-20 08:27:14.717336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.715 [2024-11-20 08:27:14.717343] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.715 [2024-11-20 08:27:14.728935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.715 [2024-11-20 08:27:14.729288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.715 [2024-11-20 08:27:14.729306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.715 [2024-11-20 08:27:14.729313] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.715 [2024-11-20 08:27:14.729473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.715 [2024-11-20 08:27:14.729631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.715 [2024-11-20 08:27:14.729641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.715 [2024-11-20 08:27:14.729647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.715 [2024-11-20 08:27:14.729653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.976 [2024-11-20 08:27:14.741700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.976 [2024-11-20 08:27:14.742124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.976 [2024-11-20 08:27:14.742168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.976 [2024-11-20 08:27:14.742193] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.976 [2024-11-20 08:27:14.742793] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.976 [2024-11-20 08:27:14.742963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.976 [2024-11-20 08:27:14.742973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.976 [2024-11-20 08:27:14.742979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.976 [2024-11-20 08:27:14.742986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.976 [2024-11-20 08:27:14.754475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.976 [2024-11-20 08:27:14.754827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.976 [2024-11-20 08:27:14.754843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.976 [2024-11-20 08:27:14.754851] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.976 [2024-11-20 08:27:14.755010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.976 [2024-11-20 08:27:14.755169] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.976 [2024-11-20 08:27:14.755178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.976 [2024-11-20 08:27:14.755185] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.976 [2024-11-20 08:27:14.755191] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.976 [2024-11-20 08:27:14.767208] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.976 [2024-11-20 08:27:14.767619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.976 [2024-11-20 08:27:14.767637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.976 [2024-11-20 08:27:14.767644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.976 [2024-11-20 08:27:14.767803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.976 [2024-11-20 08:27:14.767962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.976 [2024-11-20 08:27:14.767972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.976 [2024-11-20 08:27:14.767978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.976 [2024-11-20 08:27:14.767984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.976 [2024-11-20 08:27:14.780044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.976 [2024-11-20 08:27:14.780456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.976 [2024-11-20 08:27:14.780473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.976 [2024-11-20 08:27:14.780480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.976 [2024-11-20 08:27:14.780639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.976 [2024-11-20 08:27:14.780797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.976 [2024-11-20 08:27:14.780807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.976 [2024-11-20 08:27:14.780813] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.976 [2024-11-20 08:27:14.780819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.976 [2024-11-20 08:27:14.792885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.976 [2024-11-20 08:27:14.793275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.976 [2024-11-20 08:27:14.793293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.976 [2024-11-20 08:27:14.793304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.976 [2024-11-20 08:27:14.793463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.976 [2024-11-20 08:27:14.793621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.976 [2024-11-20 08:27:14.793631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.976 [2024-11-20 08:27:14.793637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.976 [2024-11-20 08:27:14.793643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.976 [2024-11-20 08:27:14.805729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.976 [2024-11-20 08:27:14.806142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.976 [2024-11-20 08:27:14.806158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.976 [2024-11-20 08:27:14.806166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.976 [2024-11-20 08:27:14.806353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.976 [2024-11-20 08:27:14.806521] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.976 [2024-11-20 08:27:14.806531] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.976 [2024-11-20 08:27:14.806538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.976 [2024-11-20 08:27:14.806545] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.976 [2024-11-20 08:27:14.818580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.976 [2024-11-20 08:27:14.818988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.976 [2024-11-20 08:27:14.819005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.976 [2024-11-20 08:27:14.819012] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.976 [2024-11-20 08:27:14.819170] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.976 [2024-11-20 08:27:14.819358] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.976 [2024-11-20 08:27:14.819368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.976 [2024-11-20 08:27:14.819374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.976 [2024-11-20 08:27:14.819381] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.976 [2024-11-20 08:27:14.831359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.976 [2024-11-20 08:27:14.831775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.976 [2024-11-20 08:27:14.831792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.976 [2024-11-20 08:27:14.831800] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.976 [2024-11-20 08:27:14.831958] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.976 [2024-11-20 08:27:14.832119] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.976 [2024-11-20 08:27:14.832129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.976 [2024-11-20 08:27:14.832135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.976 [2024-11-20 08:27:14.832141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.976 [2024-11-20 08:27:14.844289] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.976 [2024-11-20 08:27:14.844696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.977 [2024-11-20 08:27:14.844713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.977 [2024-11-20 08:27:14.844721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.977 [2024-11-20 08:27:14.844880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.977 [2024-11-20 08:27:14.845039] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.977 [2024-11-20 08:27:14.845048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.977 [2024-11-20 08:27:14.845055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.977 [2024-11-20 08:27:14.845061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.977 [2024-11-20 08:27:14.857053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.977 [2024-11-20 08:27:14.857477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.977 [2024-11-20 08:27:14.857496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.977 [2024-11-20 08:27:14.857505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.977 [2024-11-20 08:27:14.857673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.977 [2024-11-20 08:27:14.857841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.977 [2024-11-20 08:27:14.857851] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.977 [2024-11-20 08:27:14.857858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.977 [2024-11-20 08:27:14.857864] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.977 [2024-11-20 08:27:14.870006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:00.977 [2024-11-20 08:27:14.870410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:00.977 [2024-11-20 08:27:14.870428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:00.977 [2024-11-20 08:27:14.870437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:00.977 [2024-11-20 08:27:14.870610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:00.977 [2024-11-20 08:27:14.870782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:00.977 [2024-11-20 08:27:14.870792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:00.977 [2024-11-20 08:27:14.870803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:00.977 [2024-11-20 08:27:14.870810] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:00.977 [2024-11-20 08:27:14.882991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.977 [2024-11-20 08:27:14.883418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.977 [2024-11-20 08:27:14.883437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.977 [2024-11-20 08:27:14.883444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.977 [2024-11-20 08:27:14.883622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.977 [2024-11-20 08:27:14.883790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.977 [2024-11-20 08:27:14.883800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.977 [2024-11-20 08:27:14.883807] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.977 [2024-11-20 08:27:14.883814] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.977 [2024-11-20 08:27:14.895769] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.977 [2024-11-20 08:27:14.896158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.977 [2024-11-20 08:27:14.896175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.977 [2024-11-20 08:27:14.896182] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.977 [2024-11-20 08:27:14.896368] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.977 [2024-11-20 08:27:14.896536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.977 [2024-11-20 08:27:14.896545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.977 [2024-11-20 08:27:14.896553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.977 [2024-11-20 08:27:14.896560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.977 [2024-11-20 08:27:14.908555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.977 [2024-11-20 08:27:14.908942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.977 [2024-11-20 08:27:14.908959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.977 [2024-11-20 08:27:14.908968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.977 [2024-11-20 08:27:14.909126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.977 [2024-11-20 08:27:14.909311] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.977 [2024-11-20 08:27:14.909321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.977 [2024-11-20 08:27:14.909328] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.977 [2024-11-20 08:27:14.909334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.977 [2024-11-20 08:27:14.921321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.977 [2024-11-20 08:27:14.921719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.977 [2024-11-20 08:27:14.921736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.977 [2024-11-20 08:27:14.921744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.977 [2024-11-20 08:27:14.921902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.977 [2024-11-20 08:27:14.922061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.977 [2024-11-20 08:27:14.922071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.977 [2024-11-20 08:27:14.922077] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.977 [2024-11-20 08:27:14.922083] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.977 [2024-11-20 08:27:14.934086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.977 [2024-11-20 08:27:14.934476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.977 [2024-11-20 08:27:14.934524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.977 [2024-11-20 08:27:14.934549] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.977 [2024-11-20 08:27:14.935129] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.977 [2024-11-20 08:27:14.935408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.977 [2024-11-20 08:27:14.935419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.977 [2024-11-20 08:27:14.935425] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.977 [2024-11-20 08:27:14.935432] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.977 [2024-11-20 08:27:14.946922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.977 [2024-11-20 08:27:14.947347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.977 [2024-11-20 08:27:14.947394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.977 [2024-11-20 08:27:14.947419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.977 [2024-11-20 08:27:14.947855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.977 [2024-11-20 08:27:14.948015] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.977 [2024-11-20 08:27:14.948025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.977 [2024-11-20 08:27:14.948031] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.977 [2024-11-20 08:27:14.948038] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.977 [2024-11-20 08:27:14.959656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.977 [2024-11-20 08:27:14.960071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.977 [2024-11-20 08:27:14.960088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.977 [2024-11-20 08:27:14.960098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.977 [2024-11-20 08:27:14.960280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.977 [2024-11-20 08:27:14.960449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.977 [2024-11-20 08:27:14.960458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.977 [2024-11-20 08:27:14.960465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.977 [2024-11-20 08:27:14.960471] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.977 [2024-11-20 08:27:14.972522] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.978 [2024-11-20 08:27:14.972837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.978 [2024-11-20 08:27:14.972853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.978 [2024-11-20 08:27:14.972860] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.978 [2024-11-20 08:27:14.973019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.978 [2024-11-20 08:27:14.973177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.978 [2024-11-20 08:27:14.973187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.978 [2024-11-20 08:27:14.973193] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.978 [2024-11-20 08:27:14.973199] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:00.978 [2024-11-20 08:27:14.985403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:00.978 [2024-11-20 08:27:14.985687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:00.978 [2024-11-20 08:27:14.985705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:00.978 [2024-11-20 08:27:14.985712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:00.978 [2024-11-20 08:27:14.985879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:00.978 [2024-11-20 08:27:14.986047] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:00.978 [2024-11-20 08:27:14.986057] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:00.978 [2024-11-20 08:27:14.986064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:00.978 [2024-11-20 08:27:14.986071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.238 [2024-11-20 08:27:14.998341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.238 [2024-11-20 08:27:14.998775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.238 [2024-11-20 08:27:14.998793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.238 [2024-11-20 08:27:14.998801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.238 [2024-11-20 08:27:14.998974] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.238 [2024-11-20 08:27:14.999150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.238 [2024-11-20 08:27:14.999160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.238 [2024-11-20 08:27:14.999167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.238 [2024-11-20 08:27:14.999173] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.238 [2024-11-20 08:27:15.011175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.238 [2024-11-20 08:27:15.011588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.238 [2024-11-20 08:27:15.011605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.238 [2024-11-20 08:27:15.011613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.238 [2024-11-20 08:27:15.011771] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.238 [2024-11-20 08:27:15.011931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.238 [2024-11-20 08:27:15.011941] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.238 [2024-11-20 08:27:15.011947] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.238 [2024-11-20 08:27:15.011953] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.238 [2024-11-20 08:27:15.024121] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.238 [2024-11-20 08:27:15.024479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.238 [2024-11-20 08:27:15.024525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.238 [2024-11-20 08:27:15.024550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.238 [2024-11-20 08:27:15.025011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.238 [2024-11-20 08:27:15.025172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.238 [2024-11-20 08:27:15.025182] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.238 [2024-11-20 08:27:15.025188] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.238 [2024-11-20 08:27:15.025195] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.238 [2024-11-20 08:27:15.039091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.238 [2024-11-20 08:27:15.039597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.238 [2024-11-20 08:27:15.039620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.238 [2024-11-20 08:27:15.039632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.238 [2024-11-20 08:27:15.039885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.238 [2024-11-20 08:27:15.040140] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.238 [2024-11-20 08:27:15.040153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.238 [2024-11-20 08:27:15.040167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.238 [2024-11-20 08:27:15.040177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.238 [2024-11-20 08:27:15.051953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.238 [2024-11-20 08:27:15.052374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.238 [2024-11-20 08:27:15.052392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.238 [2024-11-20 08:27:15.052400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.238 [2024-11-20 08:27:15.052568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.238 [2024-11-20 08:27:15.052736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.238 [2024-11-20 08:27:15.052746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.238 [2024-11-20 08:27:15.052752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.238 [2024-11-20 08:27:15.052759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.238 [2024-11-20 08:27:15.064973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.238 [2024-11-20 08:27:15.065345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.238 [2024-11-20 08:27:15.065364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.238 [2024-11-20 08:27:15.065372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.238 [2024-11-20 08:27:15.065544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.238 [2024-11-20 08:27:15.065717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.238 [2024-11-20 08:27:15.065727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.238 [2024-11-20 08:27:15.065734] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.238 [2024-11-20 08:27:15.065741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.238 [2024-11-20 08:27:15.078031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.238 [2024-11-20 08:27:15.078447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.238 [2024-11-20 08:27:15.078493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.238 [2024-11-20 08:27:15.078517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.238 [2024-11-20 08:27:15.078784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.238 [2024-11-20 08:27:15.078957] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.238 [2024-11-20 08:27:15.078967] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.238 [2024-11-20 08:27:15.078974] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.238 [2024-11-20 08:27:15.078982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.238 [2024-11-20 08:27:15.091100] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.238 [2024-11-20 08:27:15.091534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.238 [2024-11-20 08:27:15.091552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.238 [2024-11-20 08:27:15.091560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.238 [2024-11-20 08:27:15.091733] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.238 [2024-11-20 08:27:15.091907] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.238 [2024-11-20 08:27:15.091917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.238 [2024-11-20 08:27:15.091923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.238 [2024-11-20 08:27:15.091930] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.238 [2024-11-20 08:27:15.104067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.238 [2024-11-20 08:27:15.104403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.238 [2024-11-20 08:27:15.104422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.238 [2024-11-20 08:27:15.104430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.238 [2024-11-20 08:27:15.104602] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.238 [2024-11-20 08:27:15.104774] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.238 [2024-11-20 08:27:15.104784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.238 [2024-11-20 08:27:15.104791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.238 [2024-11-20 08:27:15.104798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.238 [2024-11-20 08:27:15.117360] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.238 [2024-11-20 08:27:15.117817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.238 [2024-11-20 08:27:15.117835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.238 [2024-11-20 08:27:15.117843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.238 [2024-11-20 08:27:15.118026] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.239 [2024-11-20 08:27:15.118234] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.239 [2024-11-20 08:27:15.118247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.239 [2024-11-20 08:27:15.118255] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.239 [2024-11-20 08:27:15.118263] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.239 [2024-11-20 08:27:15.130373] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.239 [2024-11-20 08:27:15.130802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.239 [2024-11-20 08:27:15.130821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.239 [2024-11-20 08:27:15.130832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.239 [2024-11-20 08:27:15.131005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.239 [2024-11-20 08:27:15.131179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.239 [2024-11-20 08:27:15.131190] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.239 [2024-11-20 08:27:15.131198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.239 [2024-11-20 08:27:15.131210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.239 [2024-11-20 08:27:15.143388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.239 [2024-11-20 08:27:15.143723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.239 [2024-11-20 08:27:15.143741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.239 [2024-11-20 08:27:15.143748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.239 [2024-11-20 08:27:15.143916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.239 [2024-11-20 08:27:15.144084] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.239 [2024-11-20 08:27:15.144094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.239 [2024-11-20 08:27:15.144100] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.239 [2024-11-20 08:27:15.144107] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.239 [2024-11-20 08:27:15.156388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.239 [2024-11-20 08:27:15.156775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.239 [2024-11-20 08:27:15.156793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.239 [2024-11-20 08:27:15.156801] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.239 [2024-11-20 08:27:15.156959] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.239 [2024-11-20 08:27:15.157118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.239 [2024-11-20 08:27:15.157127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.239 [2024-11-20 08:27:15.157134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.239 [2024-11-20 08:27:15.157140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.239 [2024-11-20 08:27:15.169319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.239 [2024-11-20 08:27:15.169716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.239 [2024-11-20 08:27:15.169761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.239 [2024-11-20 08:27:15.169784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.239 [2024-11-20 08:27:15.170249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.239 [2024-11-20 08:27:15.170415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.239 [2024-11-20 08:27:15.170424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.239 [2024-11-20 08:27:15.170431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.239 [2024-11-20 08:27:15.170438] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.239 [2024-11-20 08:27:15.182245] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.239 [2024-11-20 08:27:15.182580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.239 [2024-11-20 08:27:15.182597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.239 [2024-11-20 08:27:15.182604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.239 [2024-11-20 08:27:15.182762] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.239 [2024-11-20 08:27:15.182921] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.239 [2024-11-20 08:27:15.182930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.239 [2024-11-20 08:27:15.182936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.239 [2024-11-20 08:27:15.182942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.239 [2024-11-20 08:27:15.195306] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.239 [2024-11-20 08:27:15.195637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.239 [2024-11-20 08:27:15.195655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.239 [2024-11-20 08:27:15.195663] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.239 [2024-11-20 08:27:15.195835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.239 [2024-11-20 08:27:15.196007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.239 [2024-11-20 08:27:15.196018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.239 [2024-11-20 08:27:15.196025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.239 [2024-11-20 08:27:15.196032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.239 [2024-11-20 08:27:15.208206] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.239 [2024-11-20 08:27:15.208643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.239 [2024-11-20 08:27:15.208692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.239 [2024-11-20 08:27:15.208717] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.239 [2024-11-20 08:27:15.209253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.239 [2024-11-20 08:27:15.209423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.239 [2024-11-20 08:27:15.209433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.239 [2024-11-20 08:27:15.209443] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.239 [2024-11-20 08:27:15.209450] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.239 [2024-11-20 08:27:15.221132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.239 [2024-11-20 08:27:15.221408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.239 [2024-11-20 08:27:15.221426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.239 [2024-11-20 08:27:15.221434] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.239 [2024-11-20 08:27:15.221592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.239 [2024-11-20 08:27:15.221751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.239 [2024-11-20 08:27:15.221760] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.239 [2024-11-20 08:27:15.221766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.239 [2024-11-20 08:27:15.221773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.239 [2024-11-20 08:27:15.234067] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.239 [2024-11-20 08:27:15.234436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.239 [2024-11-20 08:27:15.234483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.239 [2024-11-20 08:27:15.234507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.239 [2024-11-20 08:27:15.235087] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.239 [2024-11-20 08:27:15.235522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.239 [2024-11-20 08:27:15.235533] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.239 [2024-11-20 08:27:15.235539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.239 [2024-11-20 08:27:15.235546] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.239 [2024-11-20 08:27:15.246973] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.239 [2024-11-20 08:27:15.247317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.239 [2024-11-20 08:27:15.247336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.240 [2024-11-20 08:27:15.247344] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.240 [2024-11-20 08:27:15.247511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.240 [2024-11-20 08:27:15.247679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.240 [2024-11-20 08:27:15.247689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.240 [2024-11-20 08:27:15.247696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.240 [2024-11-20 08:27:15.247702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.240 [2024-11-20 08:27:15.259841] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.240 [2024-11-20 08:27:15.260191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.240 [2024-11-20 08:27:15.260215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.240 [2024-11-20 08:27:15.260224] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.500 [2024-11-20 08:27:15.260392] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.500 [2024-11-20 08:27:15.260560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.500 [2024-11-20 08:27:15.260572] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.500 [2024-11-20 08:27:15.260579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.500 [2024-11-20 08:27:15.260587] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.500 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1854158 Killed "${NVMF_APP[@]}" "$@"
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:01.500 [2024-11-20 08:27:15.272882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.500 [2024-11-20 08:27:15.273339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.500 [2024-11-20 08:27:15.273358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.500 [2024-11-20 08:27:15.273366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.500 [2024-11-20 08:27:15.273539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.500 [2024-11-20 08:27:15.273712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.500 [2024-11-20 08:27:15.273723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.500 [2024-11-20 08:27:15.273729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.500 [2024-11-20 08:27:15.273736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=1855454
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 1855454
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 1855454 ']'
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:01.500 08:27:15 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:30:01.500 [2024-11-20 08:27:15.285871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.500 [2024-11-20 08:27:15.286323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.500 [2024-11-20 08:27:15.286342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.500 [2024-11-20 08:27:15.286351] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.500 [2024-11-20 08:27:15.286523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.500 [2024-11-20 08:27:15.286696] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.500 [2024-11-20 08:27:15.286706] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.500 [2024-11-20 08:27:15.286713] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.500 [2024-11-20 08:27:15.286720] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.500 [2024-11-20 08:27:15.298857] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.500 [2024-11-20 08:27:15.299212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.500 [2024-11-20 08:27:15.299229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.500 [2024-11-20 08:27:15.299238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.500 [2024-11-20 08:27:15.299410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.500 [2024-11-20 08:27:15.299581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.500 [2024-11-20 08:27:15.299590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.500 [2024-11-20 08:27:15.299598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.500 [2024-11-20 08:27:15.299604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.500 [2024-11-20 08:27:15.311914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.500 [2024-11-20 08:27:15.312344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.500 [2024-11-20 08:27:15.312363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.500 [2024-11-20 08:27:15.312371] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.500 [2024-11-20 08:27:15.312544] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.500 [2024-11-20 08:27:15.312718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.500 [2024-11-20 08:27:15.312728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.500 [2024-11-20 08:27:15.312735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.500 [2024-11-20 08:27:15.312741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.500 [2024-11-20 08:27:15.324885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.500 [2024-11-20 08:27:15.325255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.500 [2024-11-20 08:27:15.325280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.500 [2024-11-20 08:27:15.325288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.500 [2024-11-20 08:27:15.325461] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.500 [2024-11-20 08:27:15.325633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.500 [2024-11-20 08:27:15.325643] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.500 [2024-11-20 08:27:15.325649] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.500 [2024-11-20 08:27:15.325656] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.500 [2024-11-20 08:27:15.327333] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization...
00:30:01.500 [2024-11-20 08:27:15.327379] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:30:01.500 [2024-11-20 08:27:15.337851] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.500 [2024-11-20 08:27:15.338244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.500 [2024-11-20 08:27:15.338264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.500 [2024-11-20 08:27:15.338274] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.500 [2024-11-20 08:27:15.338448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.500 [2024-11-20 08:27:15.338622] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.500 [2024-11-20 08:27:15.338633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.500 [2024-11-20 08:27:15.338640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.500 [2024-11-20 08:27:15.338647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.500 [2024-11-20 08:27:15.350957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.500 [2024-11-20 08:27:15.351247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.500 [2024-11-20 08:27:15.351267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.500 [2024-11-20 08:27:15.351275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.500 [2024-11-20 08:27:15.351443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.500 [2024-11-20 08:27:15.351612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.500 [2024-11-20 08:27:15.351622] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.500 [2024-11-20 08:27:15.351629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.500 [2024-11-20 08:27:15.351637] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.500 [2024-11-20 08:27:15.363995] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.500 [2024-11-20 08:27:15.364269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.500 [2024-11-20 08:27:15.364291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.501 [2024-11-20 08:27:15.364299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.501 [2024-11-20 08:27:15.364471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.501 [2024-11-20 08:27:15.364644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.501 [2024-11-20 08:27:15.364654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.501 [2024-11-20 08:27:15.364660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.501 [2024-11-20 08:27:15.364667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.501 [2024-11-20 08:27:15.376957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.501 [2024-11-20 08:27:15.377343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.501 [2024-11-20 08:27:15.377360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.501 [2024-11-20 08:27:15.377367] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.501 [2024-11-20 08:27:15.377539] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.501 [2024-11-20 08:27:15.377713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.501 [2024-11-20 08:27:15.377723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.501 [2024-11-20 08:27:15.377730] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.501 [2024-11-20 08:27:15.377737] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.501 [2024-11-20 08:27:15.390026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.501 [2024-11-20 08:27:15.390334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.501 [2024-11-20 08:27:15.390352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.501 [2024-11-20 08:27:15.390360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.501 [2024-11-20 08:27:15.390534] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.501 [2024-11-20 08:27:15.390706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.501 [2024-11-20 08:27:15.390716] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.501 [2024-11-20 08:27:15.390723] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.501 [2024-11-20 08:27:15.390730] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.501 [2024-11-20 08:27:15.403027] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.501 [2024-11-20 08:27:15.403405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.501 [2024-11-20 08:27:15.403424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.501 [2024-11-20 08:27:15.403433] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.501 [2024-11-20 08:27:15.403610] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.501 [2024-11-20 08:27:15.403784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.501 [2024-11-20 08:27:15.403794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.501 [2024-11-20 08:27:15.403801] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.501 [2024-11-20 08:27:15.403808] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.501 [2024-11-20 08:27:15.411105] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:30:01.501 [2024-11-20 08:27:15.416110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.501 [2024-11-20 08:27:15.416409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.501 [2024-11-20 08:27:15.416428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.501 [2024-11-20 08:27:15.416437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.501 [2024-11-20 08:27:15.416609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.501 [2024-11-20 08:27:15.416782] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.501 [2024-11-20 08:27:15.416792] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.501 [2024-11-20 08:27:15.416798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.501 [2024-11-20 08:27:15.416806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.501 [2024-11-20 08:27:15.429104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.501 [2024-11-20 08:27:15.429402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.501 [2024-11-20 08:27:15.429422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.501 [2024-11-20 08:27:15.429430] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.501 [2024-11-20 08:27:15.429603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.501 [2024-11-20 08:27:15.429775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.501 [2024-11-20 08:27:15.429785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.501 [2024-11-20 08:27:15.429792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.501 [2024-11-20 08:27:15.429798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.501 [2024-11-20 08:27:15.442104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.501 [2024-11-20 08:27:15.442465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.501 [2024-11-20 08:27:15.442483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.501 [2024-11-20 08:27:15.442491] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.501 [2024-11-20 08:27:15.442664] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.501 [2024-11-20 08:27:15.442842] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.501 [2024-11-20 08:27:15.442853] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.501 [2024-11-20 08:27:15.442860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.501 [2024-11-20 08:27:15.442867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.501 [2024-11-20 08:27:15.455165] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.501 [2024-11-20 08:27:15.455496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:30:01.501 [2024-11-20 08:27:15.455524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:30:01.501 [2024-11-20 08:27:15.455532] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:01.501 [2024-11-20 08:27:15.455538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:01.501 [2024-11-20 08:27:15.455543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:01.501 [2024-11-20 08:27:15.455585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.501 [2024-11-20 08:27:15.455602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.501 [2024-11-20 08:27:15.455610] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.501 [2024-11-20 08:27:15.455783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.501 [2024-11-20 08:27:15.455955] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.501 [2024-11-20 08:27:15.455965] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.501 [2024-11-20 08:27:15.455972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.501 [2024-11-20 08:27:15.455979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.501 [2024-11-20 08:27:15.460222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:01.501 [2024-11-20 08:27:15.460258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:01.501 [2024-11-20 08:27:15.460258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:01.501 [2024-11-20 08:27:15.468125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.501 [2024-11-20 08:27:15.468509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.501 [2024-11-20 08:27:15.468528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.501 [2024-11-20 08:27:15.468537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.501 [2024-11-20 08:27:15.468711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.501 [2024-11-20 08:27:15.468888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.501 [2024-11-20 08:27:15.468898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.501 [2024-11-20 08:27:15.468905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.501 [2024-11-20 08:27:15.468912] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.501 [2024-11-20 08:27:15.481225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.501 [2024-11-20 08:27:15.481537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.501 [2024-11-20 08:27:15.481557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.501 [2024-11-20 08:27:15.481566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.502 [2024-11-20 08:27:15.481739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.502 [2024-11-20 08:27:15.481913] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.502 [2024-11-20 08:27:15.481923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.502 [2024-11-20 08:27:15.481931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.502 [2024-11-20 08:27:15.481939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.502 [2024-11-20 08:27:15.494246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.502 [2024-11-20 08:27:15.494618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.502 [2024-11-20 08:27:15.494639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.502 [2024-11-20 08:27:15.494647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.502 [2024-11-20 08:27:15.494821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.502 [2024-11-20 08:27:15.494995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.502 [2024-11-20 08:27:15.495005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.502 [2024-11-20 08:27:15.495012] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.502 [2024-11-20 08:27:15.495020] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.502 [2024-11-20 08:27:15.507324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.502 [2024-11-20 08:27:15.507698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.502 [2024-11-20 08:27:15.507717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.502 [2024-11-20 08:27:15.507726] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.502 [2024-11-20 08:27:15.507900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.502 [2024-11-20 08:27:15.508073] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.502 [2024-11-20 08:27:15.508084] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.502 [2024-11-20 08:27:15.508092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.502 [2024-11-20 08:27:15.508100] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.502 [2024-11-20 08:27:15.520395] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.502 [2024-11-20 08:27:15.520816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.502 [2024-11-20 08:27:15.520835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.502 [2024-11-20 08:27:15.520844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.502 [2024-11-20 08:27:15.521022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.502 [2024-11-20 08:27:15.521198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.502 [2024-11-20 08:27:15.521214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.502 [2024-11-20 08:27:15.521222] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.502 [2024-11-20 08:27:15.521229] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.761 [2024-11-20 08:27:15.533365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.761 [2024-11-20 08:27:15.533802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.761 [2024-11-20 08:27:15.533821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.761 [2024-11-20 08:27:15.533829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.761 [2024-11-20 08:27:15.534003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.761 [2024-11-20 08:27:15.534177] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.761 [2024-11-20 08:27:15.534187] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.761 [2024-11-20 08:27:15.534195] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.761 [2024-11-20 08:27:15.534208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.761 [2024-11-20 08:27:15.546519] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.761 [2024-11-20 08:27:15.546935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.761 [2024-11-20 08:27:15.546955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.761 [2024-11-20 08:27:15.546963] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.761 [2024-11-20 08:27:15.547135] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.761 [2024-11-20 08:27:15.547313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.761 [2024-11-20 08:27:15.547324] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.761 [2024-11-20 08:27:15.547330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.761 [2024-11-20 08:27:15.547338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.761 [2024-11-20 08:27:15.559617] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.762 [2024-11-20 08:27:15.559941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.762 [2024-11-20 08:27:15.559960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.762 [2024-11-20 08:27:15.559968] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.762 [2024-11-20 08:27:15.560141] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.762 [2024-11-20 08:27:15.560318] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.762 [2024-11-20 08:27:15.560332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.762 [2024-11-20 08:27:15.560339] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.762 [2024-11-20 08:27:15.560346] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.762 [2024-11-20 08:27:15.572630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.762 [2024-11-20 08:27:15.573066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.762 [2024-11-20 08:27:15.573084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.762 [2024-11-20 08:27:15.573092] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.762 [2024-11-20 08:27:15.573270] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.762 [2024-11-20 08:27:15.573443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.762 [2024-11-20 08:27:15.573453] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.762 [2024-11-20 08:27:15.573459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.762 [2024-11-20 08:27:15.573467] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.762 [2024-11-20 08:27:15.585580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.762 [2024-11-20 08:27:15.586003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.762 [2024-11-20 08:27:15.586021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.762 [2024-11-20 08:27:15.586029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.762 [2024-11-20 08:27:15.586207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.762 [2024-11-20 08:27:15.586381] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.762 [2024-11-20 08:27:15.586390] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.762 [2024-11-20 08:27:15.586397] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.762 [2024-11-20 08:27:15.586403] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.762 [2024-11-20 08:27:15.598676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.762 [2024-11-20 08:27:15.599079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.762 [2024-11-20 08:27:15.599097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.762 [2024-11-20 08:27:15.599105] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.762 [2024-11-20 08:27:15.599281] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.762 [2024-11-20 08:27:15.599454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.762 [2024-11-20 08:27:15.599465] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.762 [2024-11-20 08:27:15.599471] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.762 [2024-11-20 08:27:15.599482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.762 [2024-11-20 08:27:15.611773] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.762 [2024-11-20 08:27:15.612209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.762 [2024-11-20 08:27:15.612227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.762 [2024-11-20 08:27:15.612235] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.762 [2024-11-20 08:27:15.612407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.762 [2024-11-20 08:27:15.612581] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.762 [2024-11-20 08:27:15.612591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.762 [2024-11-20 08:27:15.612598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.762 [2024-11-20 08:27:15.612604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.762 [2024-11-20 08:27:15.624881] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.762 [2024-11-20 08:27:15.625310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.762 [2024-11-20 08:27:15.625329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.762 [2024-11-20 08:27:15.625337] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.762 [2024-11-20 08:27:15.625509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.762 [2024-11-20 08:27:15.625683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.762 [2024-11-20 08:27:15.625693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.762 [2024-11-20 08:27:15.625700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.762 [2024-11-20 08:27:15.625708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.762 5092.17 IOPS, 19.89 MiB/s [2024-11-20T07:27:15.790Z] [2024-11-20 08:27:15.637960] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.762 [2024-11-20 08:27:15.638368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.762 [2024-11-20 08:27:15.638387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.762 [2024-11-20 08:27:15.638395] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.762 [2024-11-20 08:27:15.638568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.762 [2024-11-20 08:27:15.638741] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.762 [2024-11-20 08:27:15.638751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.762 [2024-11-20 08:27:15.638757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.762 [2024-11-20 08:27:15.638764] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.762 [2024-11-20 08:27:15.651053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.762 [2024-11-20 08:27:15.651418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.762 [2024-11-20 08:27:15.651436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.762 [2024-11-20 08:27:15.651444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.762 [2024-11-20 08:27:15.651617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.762 [2024-11-20 08:27:15.651790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.762 [2024-11-20 08:27:15.651801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.762 [2024-11-20 08:27:15.651809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.762 [2024-11-20 08:27:15.651816] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.762 [2024-11-20 08:27:15.664110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.762 [2024-11-20 08:27:15.664523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.762 [2024-11-20 08:27:15.664541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.762 [2024-11-20 08:27:15.664550] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.762 [2024-11-20 08:27:15.664722] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.762 [2024-11-20 08:27:15.664895] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.762 [2024-11-20 08:27:15.664905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.762 [2024-11-20 08:27:15.664912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.762 [2024-11-20 08:27:15.664919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.762 [2024-11-20 08:27:15.677197] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.762 [2024-11-20 08:27:15.677606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.762 [2024-11-20 08:27:15.677624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.762 [2024-11-20 08:27:15.677632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.762 [2024-11-20 08:27:15.677806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.762 [2024-11-20 08:27:15.677978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.762 [2024-11-20 08:27:15.677988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.762 [2024-11-20 08:27:15.677995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.762 [2024-11-20 08:27:15.678002] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.763 [2024-11-20 08:27:15.690280] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.763 [2024-11-20 08:27:15.690601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.763 [2024-11-20 08:27:15.690619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.763 [2024-11-20 08:27:15.690627] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.763 [2024-11-20 08:27:15.690804] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.763 [2024-11-20 08:27:15.690979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.763 [2024-11-20 08:27:15.690989] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.763 [2024-11-20 08:27:15.690996] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.763 [2024-11-20 08:27:15.691004] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.763 [2024-11-20 08:27:15.703298] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.763 [2024-11-20 08:27:15.703654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.763 [2024-11-20 08:27:15.703671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.763 [2024-11-20 08:27:15.703679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.763 [2024-11-20 08:27:15.703851] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.763 [2024-11-20 08:27:15.704023] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.763 [2024-11-20 08:27:15.704033] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.763 [2024-11-20 08:27:15.704039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.763 [2024-11-20 08:27:15.704046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.763 [2024-11-20 08:27:15.716344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:01.763 [2024-11-20 08:27:15.716756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:01.763 [2024-11-20 08:27:15.716773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:01.763 [2024-11-20 08:27:15.716781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:01.763 [2024-11-20 08:27:15.716954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:01.763 [2024-11-20 08:27:15.717128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:01.763 [2024-11-20 08:27:15.717138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:01.763 [2024-11-20 08:27:15.717145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:01.763 [2024-11-20 08:27:15.717152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:01.763 [2024-11-20 08:27:15.729314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.763 [2024-11-20 08:27:15.729746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.763 [2024-11-20 08:27:15.729764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.763 [2024-11-20 08:27:15.729772] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.763 [2024-11-20 08:27:15.729945] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.763 [2024-11-20 08:27:15.730118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.763 [2024-11-20 08:27:15.730132] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.763 [2024-11-20 08:27:15.730138] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.763 [2024-11-20 08:27:15.730146] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.763 [2024-11-20 08:27:15.742275] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.763 [2024-11-20 08:27:15.742725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.763 [2024-11-20 08:27:15.742743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.763 [2024-11-20 08:27:15.742751] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.763 [2024-11-20 08:27:15.742923] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.763 [2024-11-20 08:27:15.743098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.763 [2024-11-20 08:27:15.743108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.763 [2024-11-20 08:27:15.743115] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.763 [2024-11-20 08:27:15.743122] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.763 [2024-11-20 08:27:15.755243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.763 [2024-11-20 08:27:15.755650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.763 [2024-11-20 08:27:15.755667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.763 [2024-11-20 08:27:15.755675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.763 [2024-11-20 08:27:15.755847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.763 [2024-11-20 08:27:15.756020] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.763 [2024-11-20 08:27:15.756030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.763 [2024-11-20 08:27:15.756037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.763 [2024-11-20 08:27:15.756043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.763 [2024-11-20 08:27:15.768323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.763 [2024-11-20 08:27:15.768750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.763 [2024-11-20 08:27:15.768768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.763 [2024-11-20 08:27:15.768776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.763 [2024-11-20 08:27:15.768949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.763 [2024-11-20 08:27:15.769124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.763 [2024-11-20 08:27:15.769133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.763 [2024-11-20 08:27:15.769140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.763 [2024-11-20 08:27:15.769150] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:01.763 [2024-11-20 08:27:15.781274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:01.763 [2024-11-20 08:27:15.781606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:01.763 [2024-11-20 08:27:15.781623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:01.763 [2024-11-20 08:27:15.781631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:01.763 [2024-11-20 08:27:15.781803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:01.763 [2024-11-20 08:27:15.781976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:01.763 [2024-11-20 08:27:15.781986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:01.763 [2024-11-20 08:27:15.781992] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:01.763 [2024-11-20 08:27:15.781999] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.024 [2024-11-20 08:27:15.794291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.024 [2024-11-20 08:27:15.794716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.024 [2024-11-20 08:27:15.794734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.024 [2024-11-20 08:27:15.794742] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.024 [2024-11-20 08:27:15.794915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.024 [2024-11-20 08:27:15.795090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.024 [2024-11-20 08:27:15.795100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.024 [2024-11-20 08:27:15.795107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.024 [2024-11-20 08:27:15.795113] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.024 [2024-11-20 08:27:15.807401] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.024 [2024-11-20 08:27:15.807831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.024 [2024-11-20 08:27:15.807848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.024 [2024-11-20 08:27:15.807856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.024 [2024-11-20 08:27:15.808028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.024 [2024-11-20 08:27:15.808206] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.024 [2024-11-20 08:27:15.808216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.024 [2024-11-20 08:27:15.808224] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.024 [2024-11-20 08:27:15.808231] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.024 [2024-11-20 08:27:15.820508] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.024 [2024-11-20 08:27:15.820869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.024 [2024-11-20 08:27:15.820886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.024 [2024-11-20 08:27:15.820895] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.024 [2024-11-20 08:27:15.821067] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.024 [2024-11-20 08:27:15.821245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.024 [2024-11-20 08:27:15.821255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.024 [2024-11-20 08:27:15.821262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.024 [2024-11-20 08:27:15.821270] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.024 [2024-11-20 08:27:15.833550] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.024 [2024-11-20 08:27:15.833907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.024 [2024-11-20 08:27:15.833925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.024 [2024-11-20 08:27:15.833932] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.024 [2024-11-20 08:27:15.834104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.024 [2024-11-20 08:27:15.834281] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.024 [2024-11-20 08:27:15.834292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.024 [2024-11-20 08:27:15.834299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.024 [2024-11-20 08:27:15.834306] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.024 [2024-11-20 08:27:15.846583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.024 [2024-11-20 08:27:15.846951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.024 [2024-11-20 08:27:15.846968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.024 [2024-11-20 08:27:15.846976] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.024 [2024-11-20 08:27:15.847149] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.024 [2024-11-20 08:27:15.847328] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.024 [2024-11-20 08:27:15.847338] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.024 [2024-11-20 08:27:15.847345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.024 [2024-11-20 08:27:15.847353] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.024 [2024-11-20 08:27:15.859631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.024 [2024-11-20 08:27:15.859973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.024 [2024-11-20 08:27:15.859991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.024 [2024-11-20 08:27:15.859999] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.024 [2024-11-20 08:27:15.860174] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.024 [2024-11-20 08:27:15.860352] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.024 [2024-11-20 08:27:15.860364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.024 [2024-11-20 08:27:15.860370] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.024 [2024-11-20 08:27:15.860377] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.024 [2024-11-20 08:27:15.872656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.024 [2024-11-20 08:27:15.873086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.024 [2024-11-20 08:27:15.873103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.024 [2024-11-20 08:27:15.873111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.024 [2024-11-20 08:27:15.873287] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.024 [2024-11-20 08:27:15.873461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.024 [2024-11-20 08:27:15.873471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.024 [2024-11-20 08:27:15.873478] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.024 [2024-11-20 08:27:15.873485] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.024 [2024-11-20 08:27:15.885754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.024 [2024-11-20 08:27:15.886160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.024 [2024-11-20 08:27:15.886178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.024 [2024-11-20 08:27:15.886186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.024 [2024-11-20 08:27:15.886363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.024 [2024-11-20 08:27:15.886536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.024 [2024-11-20 08:27:15.886545] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.024 [2024-11-20 08:27:15.886552] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.024 [2024-11-20 08:27:15.886558] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.025 [2024-11-20 08:27:15.898834] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.025 [2024-11-20 08:27:15.899190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.025 [2024-11-20 08:27:15.899211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.025 [2024-11-20 08:27:15.899220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.025 [2024-11-20 08:27:15.899393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.025 [2024-11-20 08:27:15.899565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.025 [2024-11-20 08:27:15.899578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.025 [2024-11-20 08:27:15.899585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.025 [2024-11-20 08:27:15.899593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.025 [2024-11-20 08:27:15.911883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.025 [2024-11-20 08:27:15.912303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.025 [2024-11-20 08:27:15.912321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.025 [2024-11-20 08:27:15.912329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.025 [2024-11-20 08:27:15.912502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.025 [2024-11-20 08:27:15.912675] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.025 [2024-11-20 08:27:15.912684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.025 [2024-11-20 08:27:15.912691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.025 [2024-11-20 08:27:15.912698] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.025 [2024-11-20 08:27:15.924979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.025 [2024-11-20 08:27:15.925393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.025 [2024-11-20 08:27:15.925411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.025 [2024-11-20 08:27:15.925419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.025 [2024-11-20 08:27:15.925591] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.025 [2024-11-20 08:27:15.925764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.025 [2024-11-20 08:27:15.925774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.025 [2024-11-20 08:27:15.925781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.025 [2024-11-20 08:27:15.925787] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.025 [2024-11-20 08:27:15.938066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.025 [2024-11-20 08:27:15.938496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.025 [2024-11-20 08:27:15.938515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.025 [2024-11-20 08:27:15.938523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.025 [2024-11-20 08:27:15.938695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.025 [2024-11-20 08:27:15.938867] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.025 [2024-11-20 08:27:15.938877] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.025 [2024-11-20 08:27:15.938884] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.025 [2024-11-20 08:27:15.938898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.025 [2024-11-20 08:27:15.951011] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.025 [2024-11-20 08:27:15.951442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.025 [2024-11-20 08:27:15.951460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.025 [2024-11-20 08:27:15.951469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.025 [2024-11-20 08:27:15.951642] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.025 [2024-11-20 08:27:15.951814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.025 [2024-11-20 08:27:15.951824] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.025 [2024-11-20 08:27:15.951831] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.025 [2024-11-20 08:27:15.951838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.025 [2024-11-20 08:27:15.964108] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.025 [2024-11-20 08:27:15.964408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.025 [2024-11-20 08:27:15.964426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.025 [2024-11-20 08:27:15.964435] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.025 [2024-11-20 08:27:15.964608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.025 [2024-11-20 08:27:15.964781] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.025 [2024-11-20 08:27:15.964791] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.025 [2024-11-20 08:27:15.964797] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.025 [2024-11-20 08:27:15.964804] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.025 [2024-11-20 08:27:15.977089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.025 [2024-11-20 08:27:15.977523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.025 [2024-11-20 08:27:15.977540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.025 [2024-11-20 08:27:15.977548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.025 [2024-11-20 08:27:15.977721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.025 [2024-11-20 08:27:15.977893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.025 [2024-11-20 08:27:15.977903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.025 [2024-11-20 08:27:15.977910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.025 [2024-11-20 08:27:15.977916] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.025 [2024-11-20 08:27:15.990041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.025 [2024-11-20 08:27:15.990477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.025 [2024-11-20 08:27:15.990495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.025 [2024-11-20 08:27:15.990502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.025 [2024-11-20 08:27:15.990674] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.025 [2024-11-20 08:27:15.990847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.025 [2024-11-20 08:27:15.990857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.025 [2024-11-20 08:27:15.990865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.025 [2024-11-20 08:27:15.990872] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.025 [2024-11-20 08:27:16.002993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.025 [2024-11-20 08:27:16.003361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.025 [2024-11-20 08:27:16.003379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.025 [2024-11-20 08:27:16.003387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.025 [2024-11-20 08:27:16.003558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.025 [2024-11-20 08:27:16.003732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.025 [2024-11-20 08:27:16.003742] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.025 [2024-11-20 08:27:16.003749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.025 [2024-11-20 08:27:16.003756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.025 [2024-11-20 08:27:16.016056] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.025 [2024-11-20 08:27:16.016395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.025 [2024-11-20 08:27:16.016414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.025 [2024-11-20 08:27:16.016422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.025 [2024-11-20 08:27:16.016594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.025 [2024-11-20 08:27:16.016767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.025 [2024-11-20 08:27:16.016778] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.026 [2024-11-20 08:27:16.016784] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.026 [2024-11-20 08:27:16.016791] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.026 [2024-11-20 08:27:16.029087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.026 [2024-11-20 08:27:16.029441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.026 [2024-11-20 08:27:16.029459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.026 [2024-11-20 08:27:16.029467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.026 [2024-11-20 08:27:16.029643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.026 [2024-11-20 08:27:16.029815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.026 [2024-11-20 08:27:16.029825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.026 [2024-11-20 08:27:16.029832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.026 [2024-11-20 08:27:16.029839] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.026 [2024-11-20 08:27:16.042127] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.026 [2024-11-20 08:27:16.042553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.026 [2024-11-20 08:27:16.042571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.026 [2024-11-20 08:27:16.042580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.026 [2024-11-20 08:27:16.042751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.026 [2024-11-20 08:27:16.042925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.026 [2024-11-20 08:27:16.042935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.026 [2024-11-20 08:27:16.042942] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.026 [2024-11-20 08:27:16.042948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.287 [2024-11-20 08:27:16.055076] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.287 [2024-11-20 08:27:16.055406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.287 [2024-11-20 08:27:16.055423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.287 [2024-11-20 08:27:16.055432] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.287 [2024-11-20 08:27:16.055604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.287 [2024-11-20 08:27:16.055776] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.287 [2024-11-20 08:27:16.055786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.287 [2024-11-20 08:27:16.055793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.287 [2024-11-20 08:27:16.055799] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.287 [2024-11-20 08:27:16.068084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.287 [2024-11-20 08:27:16.068522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.287 [2024-11-20 08:27:16.068540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.287 [2024-11-20 08:27:16.068548] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.287 [2024-11-20 08:27:16.068720] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.287 [2024-11-20 08:27:16.068894] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.287 [2024-11-20 08:27:16.068907] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.287 [2024-11-20 08:27:16.068914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.287 [2024-11-20 08:27:16.068922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.287 [2024-11-20 08:27:16.081046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:30:02.287 [2024-11-20 08:27:16.081403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:02.287 [2024-11-20 08:27:16.081421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420
00:30:02.287 [2024-11-20 08:27:16.081428] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set
00:30:02.287 [2024-11-20 08:27:16.081601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor
00:30:02.287 [2024-11-20 08:27:16.081773] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:30:02.287 [2024-11-20 08:27:16.081784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:30:02.287 [2024-11-20 08:27:16.081791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:30:02.287 [2024-11-20 08:27:16.081798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:30:02.287 [2024-11-20 08:27:16.094084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.287 [2024-11-20 08:27:16.094515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.287 [2024-11-20 08:27:16.094534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.287 [2024-11-20 08:27:16.094542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.287 [2024-11-20 08:27:16.094714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.287 [2024-11-20 08:27:16.094888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.287 [2024-11-20 08:27:16.094898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.287 [2024-11-20 08:27:16.094906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.287 [2024-11-20 08:27:16.094914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.287 [2024-11-20 08:27:16.107185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.287 [2024-11-20 08:27:16.107620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.287 [2024-11-20 08:27:16.107638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.287 [2024-11-20 08:27:16.107647] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.287 [2024-11-20 08:27:16.107821] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.287 [2024-11-20 08:27:16.107995] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.287 [2024-11-20 08:27:16.108005] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.287 [2024-11-20 08:27:16.108013] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.287 [2024-11-20 08:27:16.108028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.287 [2024-11-20 08:27:16.120160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.287 [2024-11-20 08:27:16.120579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.287 [2024-11-20 08:27:16.120597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.287 [2024-11-20 08:27:16.120605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.287 [2024-11-20 08:27:16.120777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.287 [2024-11-20 08:27:16.120951] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.287 [2024-11-20 08:27:16.120961] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.287 [2024-11-20 08:27:16.120968] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.287 [2024-11-20 08:27:16.120975] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.287 [2024-11-20 08:27:16.133104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.287 [2024-11-20 08:27:16.133542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.287 [2024-11-20 08:27:16.133562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.287 [2024-11-20 08:27:16.133570] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.287 [2024-11-20 08:27:16.133743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.287 [2024-11-20 08:27:16.133916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.287 [2024-11-20 08:27:16.133926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.287 [2024-11-20 08:27:16.133934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.287 [2024-11-20 08:27:16.133942] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.287 [2024-11-20 08:27:16.146063] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.287 [2024-11-20 08:27:16.146496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.287 [2024-11-20 08:27:16.146514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.287 [2024-11-20 08:27:16.146522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.287 [2024-11-20 08:27:16.146694] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.287 [2024-11-20 08:27:16.146866] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.287 [2024-11-20 08:27:16.146876] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.287 [2024-11-20 08:27:16.146883] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.287 [2024-11-20 08:27:16.146889] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.287 [2024-11-20 08:27:16.159010] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.287 [2024-11-20 08:27:16.159391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.287 [2024-11-20 08:27:16.159409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.288 [2024-11-20 08:27:16.159416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.288 [2024-11-20 08:27:16.159589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.288 [2024-11-20 08:27:16.159762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.288 [2024-11-20 08:27:16.159772] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.288 [2024-11-20 08:27:16.159779] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.288 [2024-11-20 08:27:16.159785] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.288 [2024-11-20 08:27:16.172071] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.288 [2024-11-20 08:27:16.172388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.288 [2024-11-20 08:27:16.172406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.288 [2024-11-20 08:27:16.172413] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.288 [2024-11-20 08:27:16.172586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.288 [2024-11-20 08:27:16.172759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.288 [2024-11-20 08:27:16.172768] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.288 [2024-11-20 08:27:16.172774] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.288 [2024-11-20 08:27:16.172781] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.288 [2024-11-20 08:27:16.185070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.288 [2024-11-20 08:27:16.185482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.288 [2024-11-20 08:27:16.185499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.288 [2024-11-20 08:27:16.185506] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.288 [2024-11-20 08:27:16.185679] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.288 [2024-11-20 08:27:16.185851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.288 [2024-11-20 08:27:16.185860] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.288 [2024-11-20 08:27:16.185867] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.288 [2024-11-20 08:27:16.185873] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.288 [2024-11-20 08:27:16.198160] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.288 [2024-11-20 08:27:16.198548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.288 [2024-11-20 08:27:16.198565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.288 [2024-11-20 08:27:16.198573] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.288 [2024-11-20 08:27:16.198745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.288 [2024-11-20 08:27:16.198918] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.288 [2024-11-20 08:27:16.198927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.288 [2024-11-20 08:27:16.198934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.288 [2024-11-20 08:27:16.198940] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.288 [2024-11-20 08:27:16.211244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.288 [2024-11-20 08:27:16.211530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.288 [2024-11-20 08:27:16.211546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.288 [2024-11-20 08:27:16.211554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.288 [2024-11-20 08:27:16.211726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.288 [2024-11-20 08:27:16.211901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.288 [2024-11-20 08:27:16.211910] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.288 [2024-11-20 08:27:16.211917] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.288 [2024-11-20 08:27:16.211923] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.288 [2024-11-20 08:27:16.213565] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.288 [2024-11-20 08:27:16.224218] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.288 [2024-11-20 08:27:16.224567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.288 [2024-11-20 08:27:16.224584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.288 [2024-11-20 08:27:16.224591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.288 [2024-11-20 08:27:16.224763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.288 [2024-11-20 08:27:16.224939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.288 [2024-11-20 08:27:16.224947] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.288 [2024-11-20 08:27:16.224954] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.288 [2024-11-20 08:27:16.224960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.288 [2024-11-20 08:27:16.237253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.288 [2024-11-20 08:27:16.237684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.288 [2024-11-20 08:27:16.237700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.288 [2024-11-20 08:27:16.237708] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.288 [2024-11-20 08:27:16.237880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.288 [2024-11-20 08:27:16.238052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.288 [2024-11-20 08:27:16.238061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.288 [2024-11-20 08:27:16.238067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.288 [2024-11-20 08:27:16.238074] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.288 [2024-11-20 08:27:16.250205] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.288 [2024-11-20 08:27:16.250611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.288 [2024-11-20 08:27:16.250628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.288 [2024-11-20 08:27:16.250636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.288 [2024-11-20 08:27:16.250809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.288 [2024-11-20 08:27:16.250980] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.288 [2024-11-20 08:27:16.250988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.288 [2024-11-20 08:27:16.250995] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.288 [2024-11-20 08:27:16.251001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.288 Malloc0 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.288 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.288 [2024-11-20 08:27:16.263288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.288 [2024-11-20 08:27:16.263719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.288 [2024-11-20 08:27:16.263735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.288 [2024-11-20 08:27:16.263743] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.289 [2024-11-20 08:27:16.263920] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.289 [2024-11-20 08:27:16.264093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.289 [2024-11-20 08:27:16.264101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:30:02.289 [2024-11-20 08:27:16.264107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.289 [2024-11-20 08:27:16.264114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:30:02.289 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.289 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:02.289 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.289 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.289 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.289 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:02.289 [2024-11-20 08:27:16.276241] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.289 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:02.289 [2024-11-20 08:27:16.276640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:02.289 [2024-11-20 08:27:16.276657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x220e500 with addr=10.0.0.2, port=4420 00:30:02.289 [2024-11-20 08:27:16.276664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x220e500 is same with the state(6) to be set 00:30:02.289 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:02.289 [2024-11-20 08:27:16.276835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x220e500 (9): Bad file descriptor 00:30:02.289 [2024-11-20 08:27:16.277007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:30:02.289 [2024-11-20 08:27:16.277016] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] 
controller reinitialization failed 00:30:02.289 [2024-11-20 08:27:16.277022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:30:02.289 [2024-11-20 08:27:16.277028] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:30:02.289 [2024-11-20 08:27:16.279140] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:02.289 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:02.289 08:27:16 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1854428 00:30:02.289 [2024-11-20 08:27:16.289305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:30:02.548 [2024-11-20 08:27:16.366176] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 00:30:03.743 4791.86 IOPS, 18.72 MiB/s [2024-11-20T07:27:18.707Z] 5618.38 IOPS, 21.95 MiB/s [2024-11-20T07:27:20.082Z] 6268.00 IOPS, 24.48 MiB/s [2024-11-20T07:27:21.018Z] 6773.70 IOPS, 26.46 MiB/s [2024-11-20T07:27:21.954Z] 7221.91 IOPS, 28.21 MiB/s [2024-11-20T07:27:22.891Z] 7571.67 IOPS, 29.58 MiB/s [2024-11-20T07:27:23.828Z] 7871.23 IOPS, 30.75 MiB/s [2024-11-20T07:27:24.763Z] 8123.43 IOPS, 31.73 MiB/s [2024-11-20T07:27:24.763Z] 8349.47 IOPS, 32.62 MiB/s 00:30:10.735 Latency(us) 00:30:10.735 [2024-11-20T07:27:24.763Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:10.735 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:30:10.735 Verification LBA range: start 0x0 length 0x4000 00:30:10.735 Nvme1n1 : 15.01 8353.30 32.63 13233.59 0.00 5910.45 427.15 20097.71 00:30:10.735 [2024-11-20T07:27:24.763Z] =================================================================================================================== 00:30:10.735 
[2024-11-20T07:27:24.763Z] Total : 8353.30 32.63 13233.59 0.00 5910.45 427.15 20097.71 00:30:10.993 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:30:10.993 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:10.993 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:10.993 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:10.993 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:10.993 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:30:10.993 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:30:10.993 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:10.993 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@99 -- # sync 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # set +e 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:10.994 rmmod nvme_tcp 00:30:10.994 rmmod nvme_fabrics 00:30:10.994 rmmod nvme_keyring 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # set -e 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # return 0 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # '[' -n 1855454 ']' 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@337 -- # killprocess 1855454 
00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 1855454 ']' 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 1855454 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1855454 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1855454' 00:30:10.994 killing process with pid 1855454 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 1855454 00:30:10.994 08:27:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 1855454 00:30:11.252 08:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:11.252 08:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # nvmf_fini 00:30:11.252 08:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@254 -- # local dev 00:30:11.252 08:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@257 -- # remove_target_ns 00:30:11.252 08:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:11.252 08:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:11.252 08:27:25 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@258 -- # delete_main_bridge 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@121 -- # return 0 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@273 -- # reset_setup_interfaces 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # _dev=0 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # dev_map=() 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@274 -- # iptr 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@548 -- # iptables-save 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@548 -- # iptables-restore 00:30:13.789 00:30:13.789 real 0m26.154s 00:30:13.789 user 1m0.613s 00:30:13.789 sys 0m6.921s 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:30:13.789 ************************************ 00:30:13.789 END TEST nvmf_bdevperf 00:30:13.789 ************************************ 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:13.789 ************************************ 00:30:13.789 START TEST nvmf_target_disconnect 00:30:13.789 ************************************ 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:30:13.789 * Looking for test storage... 
00:30:13.789 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:30:13.789 08:27:27 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:30:13.789 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:13.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.790 
--rc genhtml_branch_coverage=1 00:30:13.790 --rc genhtml_function_coverage=1 00:30:13.790 --rc genhtml_legend=1 00:30:13.790 --rc geninfo_all_blocks=1 00:30:13.790 --rc geninfo_unexecuted_blocks=1 00:30:13.790 00:30:13.790 ' 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:13.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.790 --rc genhtml_branch_coverage=1 00:30:13.790 --rc genhtml_function_coverage=1 00:30:13.790 --rc genhtml_legend=1 00:30:13.790 --rc geninfo_all_blocks=1 00:30:13.790 --rc geninfo_unexecuted_blocks=1 00:30:13.790 00:30:13.790 ' 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:13.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.790 --rc genhtml_branch_coverage=1 00:30:13.790 --rc genhtml_function_coverage=1 00:30:13.790 --rc genhtml_legend=1 00:30:13.790 --rc geninfo_all_blocks=1 00:30:13.790 --rc geninfo_unexecuted_blocks=1 00:30:13.790 00:30:13.790 ' 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:13.790 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:13.790 --rc genhtml_branch_coverage=1 00:30:13.790 --rc genhtml_function_coverage=1 00:30:13.790 --rc genhtml_legend=1 00:30:13.790 --rc geninfo_all_blocks=1 00:30:13.790 --rc geninfo_unexecuted_blocks=1 00:30:13.790 00:30:13.790 ' 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@50 
-- # : 0 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:30:13.790 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:30:13.790 08:27:27 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:30:20.363 08:27:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # e810=() 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # x722=() 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@157 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:20.363 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:20.363 08:27:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:20.363 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found 
net devices under 0000:86:00.0: cvl_0_0' 00:30:20.363 Found net devices under 0000:86:00.0: cvl_0_0 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:20.363 Found net devices under 0000:86:00.1: cvl_0_1 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:20.363 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@247 -- # create_target_ns 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@28 -- # local 
-g _dev 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:30:20.364 08:27:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:20.364 10.0.0.1 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:20.364 08:27:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:20.364 10.0.0.2 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:30:20.364 
08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:20.364 08:27:33 
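The interface-pair setup traced above is spread across many small helpers (`add_to_ns`, `set_ip`, `set_up`, `ipts`). Condensed, pair 0 amounts to the commands below; the device names `cvl_0_0`/`cvl_0_1`, the namespace `nvmf_ns_spdk`, and the exact flag spellings are taken from the trace, while the dry-run wrapper that only prints the commands is mine (executing them needs root and the physical NICs):

```shell
# Dry-run sketch of the pair-0 network setup from the trace: move the target
# device into the test namespace, address both ends, bring them up, and open
# the NVMe/TCP port in the firewall. Prints the commands instead of running
# them, since they require root privileges and real interfaces.
setup_pair() {
  local initiator=$1 target=$2 ns=$3 ip1=$4 ip2=$5
  cat <<EOF
ip link set $target netns $ns
ip addr add $ip1/24 dev $initiator
ip netns exec $ns ip addr add $ip2/24 dev $target
ip link set $initiator up
ip netns exec $ns ip link set $target up
iptables -I INPUT 1 -i $initiator -p tcp --dport 4420 -j ACCEPT
EOF
}

setup_pair cvl_0_0 cvl_0_1 nvmf_ns_spdk 10.0.0.1 10.0.0.2
```

The `ping_ips` block that follows in the trace then verifies the pair in both directions: from inside the namespace to the initiator address, and from the host to the target address.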
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:20.364 08:27:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:20.364 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:20.365 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:20.365 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:30:20.365 00:30:20.365 --- 10.0.0.1 ping statistics --- 00:30:20.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.365 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:20.365 08:27:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:30:20.365 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:20.365 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:30:20.365 00:30:20.365 --- 10.0.0.2 ping statistics --- 00:30:20.365 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:20.365 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair++ )) 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # return 0 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:20.365 08:27:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=initiator1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # return 1 00:30:20.365 
08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev= 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@160 -- # return 0 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=target0 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:20.365 08:27:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # get_net_dev target1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # local dev=target1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # return 1 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@159 -- # dev= 00:30:20.365 08:27:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@160 -- # return 0 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:30:20.365 ' 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:20.365 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:20.366 ************************************ 00:30:20.366 START TEST nvmf_target_disconnect_tc1 00:30:20.366 ************************************ 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:30:20.366 08:27:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:20.366 [2024-11-20 08:27:33.683357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:20.366 [2024-11-20 08:27:33.683474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2168ab0 with addr=10.0.0.2, port=4420 00:30:20.366 [2024-11-20 08:27:33.683526] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:30:20.366 [2024-11-20 08:27:33.683563] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:20.366 [2024-11-20 08:27:33.683584] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:30:20.366 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:30:20.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:30:20.366 Initializing NVMe Controllers 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:30:20.366 00:30:20.366 real 0m0.116s 00:30:20.366 user 0m0.046s 00:30:20.366 sys 0m0.070s 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:30:20.366 ************************************ 00:30:20.366 END TEST nvmf_target_disconnect_tc1 00:30:20.366 ************************************ 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:20.366 ************************************ 00:30:20.366 START TEST nvmf_target_disconnect_tc2 00:30:20.366 ************************************ 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:20.366 08:27:33 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=1860561 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 1860561 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1860561 ']' 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:20.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:20.366 08:27:33 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.366 [2024-11-20 08:27:33.824286] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:30:20.366 [2024-11-20 08:27:33.824337] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:20.366 [2024-11-20 08:27:33.885692] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:20.366 [2024-11-20 08:27:33.927957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:20.366 [2024-11-20 08:27:33.927994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:20.366 [2024-11-20 08:27:33.928001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:20.366 [2024-11-20 08:27:33.928007] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:20.366 [2024-11-20 08:27:33.928013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:20.366 [2024-11-20 08:27:33.929690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:20.366 [2024-11-20 08:27:33.929729] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:20.366 [2024-11-20 08:27:33.929836] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:20.366 [2024-11-20 08:27:33.929837] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.366 Malloc0 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.366 08:27:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.366 [2024-11-20 08:27:34.104549] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:20.366 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.367 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.367 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.367 08:27:34 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:20.367 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.367 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.367 [2024-11-20 08:27:34.136826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:20.367 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.367 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:20.367 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:20.367 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:20.367 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:20.367 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1860726 00:30:20.367 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:30:20.367 08:27:34 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:22.279 08:27:36 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1860561 00:30:22.279 08:27:36 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 
Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 [2024-11-20 08:27:36.164881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 
00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Write completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 
00:30:22.279 [2024-11-20 08:27:36.165084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.279 starting I/O failed 00:30:22.279 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 
starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 [2024-11-20 08:27:36.165283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, 
sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Read completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 Write completed with error (sct=0, sc=8) 00:30:22.280 starting I/O failed 00:30:22.280 [2024-11-20 08:27:36.165474] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:30:22.280 [2024-11-20 08:27:36.165695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.280 [2024-11-20 08:27:36.165723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.280 qpair failed and we were unable to recover it. 00:30:22.280 [2024-11-20 08:27:36.165883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.280 [2024-11-20 08:27:36.165902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.280 qpair failed and we were unable to recover it. 00:30:22.280 [2024-11-20 08:27:36.166087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.280 [2024-11-20 08:27:36.166123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.280 qpair failed and we were unable to recover it. 00:30:22.280 [2024-11-20 08:27:36.166373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.280 [2024-11-20 08:27:36.166408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.280 qpair failed and we were unable to recover it. 00:30:22.280 [2024-11-20 08:27:36.166593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.280 [2024-11-20 08:27:36.166627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.280 qpair failed and we were unable to recover it. 
00:30:22.280 [2024-11-20 08:27:36.166868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.280 [2024-11-20 08:27:36.166903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.280 qpair failed and we were unable to recover it. 00:30:22.280 [2024-11-20 08:27:36.167117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.280 [2024-11-20 08:27:36.167151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.280 qpair failed and we were unable to recover it. 00:30:22.280 [2024-11-20 08:27:36.167378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.280 [2024-11-20 08:27:36.167391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.280 qpair failed and we were unable to recover it. 00:30:22.280 [2024-11-20 08:27:36.167539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.280 [2024-11-20 08:27:36.167573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.280 qpair failed and we were unable to recover it. 00:30:22.280 [2024-11-20 08:27:36.167842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.280 [2024-11-20 08:27:36.167876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.280 qpair failed and we were unable to recover it. 
00:30:22.280 [2024-11-20 08:27:36.168066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.280 [2024-11-20 08:27:36.168100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.280 qpair failed and we were unable to recover it. 00:30:22.280 [2024-11-20 08:27:36.168290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.168325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.168521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.168555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.168754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.168788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.168924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.168957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 
00:30:22.281 [2024-11-20 08:27:36.169221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.169258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.169436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.169468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.169694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.169728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.169945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.169978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.170241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.170278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 
00:30:22.281 [2024-11-20 08:27:36.170440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.170453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.170604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.170637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.170773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.170807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.171067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.171101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.171278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.171291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 
00:30:22.281 [2024-11-20 08:27:36.171401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.171434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.171630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.171663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.171916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.171951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.172096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.172137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.172445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.172483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 
00:30:22.281 [2024-11-20 08:27:36.172759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.172793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.172992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.173025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.173182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.173228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.173417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.173451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.173683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.173695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 
00:30:22.281 [2024-11-20 08:27:36.173854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.173886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.174018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.174051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.174239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.174275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.174525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.174538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.174681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.174693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 
00:30:22.281 [2024-11-20 08:27:36.174833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.174846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.281 [2024-11-20 08:27:36.175041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.281 [2024-11-20 08:27:36.175054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.281 qpair failed and we were unable to recover it. 00:30:22.282 [2024-11-20 08:27:36.175129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.282 [2024-11-20 08:27:36.175140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.282 qpair failed and we were unable to recover it. 00:30:22.282 [2024-11-20 08:27:36.175332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.282 [2024-11-20 08:27:36.175344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.282 qpair failed and we were unable to recover it. 00:30:22.282 [2024-11-20 08:27:36.175428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.282 [2024-11-20 08:27:36.175438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.282 qpair failed and we were unable to recover it. 
00:30:22.282 [2024-11-20 08:27:36.175653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.282 [2024-11-20 08:27:36.175665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.282 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair failed message pairs repeated 51 more times for tqpair=0x7fc864000b90, timestamps 08:27:36.175-08:27:36.185 ...]
00:30:22.283 [2024-11-20 08:27:36.185371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.283 [2024-11-20 08:27:36.185410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.283 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair failed message pairs repeated 51 more times for tqpair=0x191fba0, timestamps 08:27:36.185-08:27:36.196 ...]
00:30:22.285 [2024-11-20 08:27:36.197047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.285 [2024-11-20 08:27:36.197131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.285 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / qpair failed message pairs repeated 10 more times for tqpair=0x7fc868000b90, timestamps 08:27:36.197-08:27:36.199 ...]
00:30:22.285 [2024-11-20 08:27:36.199924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.285 [2024-11-20 08:27:36.199956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.285 qpair failed and we were unable to recover it. 00:30:22.285 [2024-11-20 08:27:36.200156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.285 [2024-11-20 08:27:36.200190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.285 qpair failed and we were unable to recover it. 00:30:22.285 [2024-11-20 08:27:36.200392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.285 [2024-11-20 08:27:36.200434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.285 qpair failed and we were unable to recover it. 00:30:22.285 [2024-11-20 08:27:36.200671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.200704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.200948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.200981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 
00:30:22.286 [2024-11-20 08:27:36.201159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.201191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.201438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.201473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.201761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.201795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.201966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.201999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.202182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.202230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 
00:30:22.286 [2024-11-20 08:27:36.202495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.202529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.202662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.202694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.202932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.202965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.203212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.203248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.203488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.203522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 
00:30:22.286 [2024-11-20 08:27:36.203718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.203751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.203947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.203981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.204226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.204262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.204535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.204568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.204761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.204793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 
00:30:22.286 [2024-11-20 08:27:36.205062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.205096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.205278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.205312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.205489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.205522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.205769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.205803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.206067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.206101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 
00:30:22.286 [2024-11-20 08:27:36.206347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.206382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.206644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.206678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.206905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.206938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.207068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.207101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.207289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.207324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 
00:30:22.286 [2024-11-20 08:27:36.207512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.207545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.207751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.207783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.208049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.208083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.208260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.208295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.208442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.208475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 
00:30:22.286 [2024-11-20 08:27:36.208595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.208629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.286 [2024-11-20 08:27:36.208891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.286 [2024-11-20 08:27:36.208924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.286 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.209123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.209158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.209418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.209453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.209743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.209776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 
00:30:22.287 [2024-11-20 08:27:36.209964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.209997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.210188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.210231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.210343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.210382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.210574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.210608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.210867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.210901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 
00:30:22.287 [2024-11-20 08:27:36.211094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.211128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.211308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.211343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.211605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.211638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.211838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.211871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.212066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.212100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 
00:30:22.287 [2024-11-20 08:27:36.212292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.212327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.212526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.212560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.212829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.212863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.213033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.213067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.213198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.213242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 
00:30:22.287 [2024-11-20 08:27:36.213510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.213543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.213839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.213873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.214133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.214167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.214374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.214410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.214673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.214706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 
00:30:22.287 [2024-11-20 08:27:36.214948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.214982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.215166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.215200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.215403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.215437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.215705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.215738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.287 [2024-11-20 08:27:36.215863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.215896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 
00:30:22.287 [2024-11-20 08:27:36.216138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.287 [2024-11-20 08:27:36.216171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.287 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.216380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.216415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.216592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.216625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.216890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.216923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.217186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.217274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 
00:30:22.288 [2024-11-20 08:27:36.217490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.217529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.217676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.217710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.217973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.218008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.218251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.218286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.218496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.218529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 
00:30:22.288 [2024-11-20 08:27:36.218792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.218827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.219114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.219147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.219300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.219334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.219602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.219636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.219904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.219937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 
00:30:22.288 [2024-11-20 08:27:36.220198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.220240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.220500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.220534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.220724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.220767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.220959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.220993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.221179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.221221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 
00:30:22.288 [2024-11-20 08:27:36.221489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.221523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.221706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.221739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.222004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.222038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.222328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.222364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.222627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.222661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 
00:30:22.288 [2024-11-20 08:27:36.222950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.222985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.223255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.223289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.223572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.223606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.223745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.223778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.223967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.224001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 
00:30:22.288 [2024-11-20 08:27:36.224268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.224304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.224495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.224529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.224714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.224748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.225000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.288 [2024-11-20 08:27:36.225034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.288 qpair failed and we were unable to recover it. 00:30:22.288 [2024-11-20 08:27:36.225324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.225359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 
00:30:22.289 [2024-11-20 08:27:36.225629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.225663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.225931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.225964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.226098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.226132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.226325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.226361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.226601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.226634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 
00:30:22.289 [2024-11-20 08:27:36.226775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.226810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.227104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.227138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.227356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.227392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.227586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.227620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.227972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.228046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 
00:30:22.289 [2024-11-20 08:27:36.228355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.228394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.228589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.228624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.228893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.228926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.229136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.229170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.229411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.229446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 
00:30:22.289 [2024-11-20 08:27:36.229638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.229672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.229863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.229896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.230156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.230191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.230488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.230522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.230791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.230824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 
00:30:22.289 [2024-11-20 08:27:36.231110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.231144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.231366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.231401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.231524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.231557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.231831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.231864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.232056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.232089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 
00:30:22.289 [2024-11-20 08:27:36.232334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.232370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.232557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.232594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.232832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.232863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.233056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.233089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 00:30:22.289 [2024-11-20 08:27:36.233332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.233367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.289 qpair failed and we were unable to recover it. 
00:30:22.289 [2024-11-20 08:27:36.233508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.289 [2024-11-20 08:27:36.233541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.233727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.233761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.233961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.233994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.234178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.234219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.234408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.234441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 
00:30:22.290 [2024-11-20 08:27:36.234646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.234680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.234986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.235025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.235228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.235264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.235557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.235589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.235759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.235791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 
00:30:22.290 [2024-11-20 08:27:36.235967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.236000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.236313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.236347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.236633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.236666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.236913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.236947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.237211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.237245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 
00:30:22.290 [2024-11-20 08:27:36.237454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.237488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.237684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.237717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.237975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.238008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.238261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.238296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.238483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.238517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 
00:30:22.290 [2024-11-20 08:27:36.238795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.238830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.239005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.239038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.239223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.239259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.239502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.239535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.239748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.239782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 
00:30:22.290 [2024-11-20 08:27:36.240039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.240072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.240323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.240358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.240645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.290 [2024-11-20 08:27:36.240678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.290 qpair failed and we were unable to recover it. 00:30:22.290 [2024-11-20 08:27:36.240896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.240928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.241172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.241215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 
00:30:22.291 [2024-11-20 08:27:36.241410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.241444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.241684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.241718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.241973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.242007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.242298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.242346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.242541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.242574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 
00:30:22.291 [2024-11-20 08:27:36.242758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.242792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.242964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.242998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.243252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.243287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.243530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.243562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.243777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.243811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 
00:30:22.291 [2024-11-20 08:27:36.244099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.244131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.244325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.244360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.244623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.244656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.244946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.244979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.245199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.245241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 
00:30:22.291 [2024-11-20 08:27:36.245507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.245541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.245712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.245746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.245959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.245994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.246252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.246287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 00:30:22.291 [2024-11-20 08:27:36.246539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.291 [2024-11-20 08:27:36.246573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.291 qpair failed and we were unable to recover it. 
00:30:22.291 [2024-11-20 08:27:36.246709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.291 [2024-11-20 08:27:36.246742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.291 qpair failed and we were unable to recover it.
00:30:22.291 [2024-11-20 08:27:36.247004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.291 [2024-11-20 08:27:36.247038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.291 qpair failed and we were unable to recover it.
00:30:22.291 [2024-11-20 08:27:36.247326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.291 [2024-11-20 08:27:36.247362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.291 qpair failed and we were unable to recover it.
00:30:22.291 [2024-11-20 08:27:36.247516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.291 [2024-11-20 08:27:36.247549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.291 qpair failed and we were unable to recover it.
00:30:22.291 [2024-11-20 08:27:36.247821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.291 [2024-11-20 08:27:36.247854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.291 qpair failed and we were unable to recover it.
00:30:22.291 [2024-11-20 08:27:36.248094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.291 [2024-11-20 08:27:36.248128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.291 qpair failed and we were unable to recover it.
00:30:22.291 [2024-11-20 08:27:36.248301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.291 [2024-11-20 08:27:36.248335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.291 qpair failed and we were unable to recover it.
00:30:22.291 [2024-11-20 08:27:36.248625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.291 [2024-11-20 08:27:36.248658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.291 qpair failed and we were unable to recover it.
00:30:22.291 [2024-11-20 08:27:36.248867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.291 [2024-11-20 08:27:36.248900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.291 qpair failed and we were unable to recover it.
00:30:22.291 [2024-11-20 08:27:36.249077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.291 [2024-11-20 08:27:36.249111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.291 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.249290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.249324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.249579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.249614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.249796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.249831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.250124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.250158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.250385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.250420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.250670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.250703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.250958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.250992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.251286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.251322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.251591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.251625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.251899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.251932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.252222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.252258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.252525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.252558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.252844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.252878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.253090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.253124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.253391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.253427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.253698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.253732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.253935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.253970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.254167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.254214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.254408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.254441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.254659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.254694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.254903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.254937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.255178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.255224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.255470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.255503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.255694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.255728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.255907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.255940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.256083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.256117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.256309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.256346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.256528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.256560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.256842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.256876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.257139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.257173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.257465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.257500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.257771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.292 [2024-11-20 08:27:36.257805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.292 qpair failed and we were unable to recover it.
00:30:22.292 [2024-11-20 08:27:36.258091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.258124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.258353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.258388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.258592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.258626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.258847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.258880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.259140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.259174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.259391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.259425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.259561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.259596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.259855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.259888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.260023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.260057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.260319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.260360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.260617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.260652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.260896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.260930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.261159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.261192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.261499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.261534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.261811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.261845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.262120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.262155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.262388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.262423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.262668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.262700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.262878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.262913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.263090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.263125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.263402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.263437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.263700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.263735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.264026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.264061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.264326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.264362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.264654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.264688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.264962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.264996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.265200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.265243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.265438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.265472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.265718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.293 [2024-11-20 08:27:36.265751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.293 qpair failed and we were unable to recover it.
00:30:22.293 [2024-11-20 08:27:36.266002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.266036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.266284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.266320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.266565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.266599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.266777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.266811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.266950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.266985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.267252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.267287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.267424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.267457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.267670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.267710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.268012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.268046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.268233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.268270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.268566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.268600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.268884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.268919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.269189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.269233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.269513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.269547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.269811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.269845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.270114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.270149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.270356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.270391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.270660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.270694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.270899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.270933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.271224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.271258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.271529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.271563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.271861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.271896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.272114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.272148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.272406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.294 [2024-11-20 08:27:36.272442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.294 qpair failed and we were unable to recover it.
00:30:22.294 [2024-11-20 08:27:36.272697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.294 [2024-11-20 08:27:36.272731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.294 qpair failed and we were unable to recover it. 00:30:22.294 [2024-11-20 08:27:36.272844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.294 [2024-11-20 08:27:36.272877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.294 qpair failed and we were unable to recover it. 00:30:22.294 [2024-11-20 08:27:36.273142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.294 [2024-11-20 08:27:36.273176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.294 qpair failed and we were unable to recover it. 00:30:22.294 [2024-11-20 08:27:36.273316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.294 [2024-11-20 08:27:36.273352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.294 qpair failed and we were unable to recover it. 00:30:22.294 [2024-11-20 08:27:36.273543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.294 [2024-11-20 08:27:36.273576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.294 qpair failed and we were unable to recover it. 
00:30:22.294 [2024-11-20 08:27:36.273795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.294 [2024-11-20 08:27:36.273828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.295 qpair failed and we were unable to recover it. 00:30:22.295 [2024-11-20 08:27:36.274074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.295 [2024-11-20 08:27:36.274108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.295 qpair failed and we were unable to recover it. 00:30:22.295 [2024-11-20 08:27:36.274313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.295 [2024-11-20 08:27:36.274349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.295 qpair failed and we were unable to recover it. 00:30:22.295 [2024-11-20 08:27:36.274613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.295 [2024-11-20 08:27:36.274646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.295 qpair failed and we were unable to recover it. 00:30:22.295 [2024-11-20 08:27:36.274843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.295 [2024-11-20 08:27:36.274878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.295 qpair failed and we were unable to recover it. 
00:30:22.295 [2024-11-20 08:27:36.275096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.295 [2024-11-20 08:27:36.275135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.295 qpair failed and we were unable to recover it. 00:30:22.295 [2024-11-20 08:27:36.275323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.295 [2024-11-20 08:27:36.275358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.295 qpair failed and we were unable to recover it. 00:30:22.295 [2024-11-20 08:27:36.275557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.295 [2024-11-20 08:27:36.275591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.295 qpair failed and we were unable to recover it. 00:30:22.295 [2024-11-20 08:27:36.275840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.295 [2024-11-20 08:27:36.275873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.295 qpair failed and we were unable to recover it. 00:30:22.295 [2024-11-20 08:27:36.276139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.295 [2024-11-20 08:27:36.276173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.295 qpair failed and we were unable to recover it. 
00:30:22.295 [2024-11-20 08:27:36.276591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.295 [2024-11-20 08:27:36.276681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.295 qpair failed and we were unable to recover it.
00:30:22.295 [2024-11-20 08:27:36.276956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.295 [2024-11-20 08:27:36.276995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.295 qpair failed and we were unable to recover it.
00:30:22.295 [2024-11-20 08:27:36.277280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.295 [2024-11-20 08:27:36.277317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.295 qpair failed and we were unable to recover it.
00:30:22.295 [2024-11-20 08:27:36.277537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.295 [2024-11-20 08:27:36.277572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.295 qpair failed and we were unable to recover it.
00:30:22.295 [2024-11-20 08:27:36.277888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.295 [2024-11-20 08:27:36.277921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.295 qpair failed and we were unable to recover it.
00:30:22.575 [2024-11-20 08:27:36.301642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.301675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.301947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.301981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.302244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.302279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.302576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.302609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.302814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.302847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 
00:30:22.575 [2024-11-20 08:27:36.303066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.303101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.303376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.303412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.303665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.303698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.303999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.304033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.304224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.304259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 
00:30:22.575 [2024-11-20 08:27:36.304542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.304575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.304839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.304873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.305168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.305217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.305413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.305448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.305725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.305760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 
00:30:22.575 [2024-11-20 08:27:36.306005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.306040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.306227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.306262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.306543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.306577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.306840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.306874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.307162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.307196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 
00:30:22.575 [2024-11-20 08:27:36.307350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.307385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.307635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.307669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.307993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.308027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.575 [2024-11-20 08:27:36.308310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.575 [2024-11-20 08:27:36.308346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.575 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.308489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.308523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 
00:30:22.576 [2024-11-20 08:27:36.308667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.308702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.308856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.308890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.309184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.309228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.309512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.309546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.309822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.309855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 
00:30:22.576 [2024-11-20 08:27:36.310114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.310149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.310456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.310493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.310771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.310805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.311086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.311120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.311404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.311441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 
00:30:22.576 [2024-11-20 08:27:36.311643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.311677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.311979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.312013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.312142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.312176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.312471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.312505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.312791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.312825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 
00:30:22.576 [2024-11-20 08:27:36.313097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.313133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.313421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.313456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.313731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.313764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.314053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.314087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.314276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.314312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 
00:30:22.576 [2024-11-20 08:27:36.314592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.314627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.314849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.314884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.315185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.315239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.315480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.315515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.315700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.315734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 
00:30:22.576 [2024-11-20 08:27:36.315955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.315989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.316267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.316303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.316583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.316623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.316897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.316931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.317183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.317229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 
00:30:22.576 [2024-11-20 08:27:36.317439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.317474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.317747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.317781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.317932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.317967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.318226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.318263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.318446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.318480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 
00:30:22.576 [2024-11-20 08:27:36.318760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.318794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.576 qpair failed and we were unable to recover it. 00:30:22.576 [2024-11-20 08:27:36.319105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.576 [2024-11-20 08:27:36.319140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.319448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.319483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.319769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.319805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.320075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.320111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 
00:30:22.577 [2024-11-20 08:27:36.320398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.320435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.320708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.320744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.321026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.321061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.321342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.321378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.321601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.321636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 
00:30:22.577 [2024-11-20 08:27:36.321827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.321863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.322152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.322187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.322381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.322416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.322621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.322657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.322891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.322926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 
00:30:22.577 [2024-11-20 08:27:36.323156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.323190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.323507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.323543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.323767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.323800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.323998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.324032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 00:30:22.577 [2024-11-20 08:27:36.324324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.577 [2024-11-20 08:27:36.324361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.577 qpair failed and we were unable to recover it. 
00:30:22.580 [2024-11-20 08:27:36.354926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.354961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.355258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.355295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.355559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.355594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.355854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.355889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.356079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.356113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 
00:30:22.580 [2024-11-20 08:27:36.356368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.356404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.356621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.356656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.356914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.356949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.357252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.357290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.357519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.357554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 
00:30:22.580 [2024-11-20 08:27:36.357871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.357907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.358117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.358151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.358294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.358330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.358616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.358650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.358905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.358940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 
00:30:22.580 [2024-11-20 08:27:36.359243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.580 [2024-11-20 08:27:36.359279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.580 qpair failed and we were unable to recover it. 00:30:22.580 [2024-11-20 08:27:36.359585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.359619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.359839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.359873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.360164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.360197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.360470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.360505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 
00:30:22.581 [2024-11-20 08:27:36.360794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.360828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.360965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.360998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.361257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.361299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.361489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.361523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.361779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.361813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 
00:30:22.581 [2024-11-20 08:27:36.362023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.362057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.362336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.362372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.362655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.362689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.362873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.362907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.363236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.363272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 
00:30:22.581 [2024-11-20 08:27:36.363529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.363563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.363846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.363881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.364160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.364195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.364481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.364516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.364794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.364828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 
00:30:22.581 [2024-11-20 08:27:36.365022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.365057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.365340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.365376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.365640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.365674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.365963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.365997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.366190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.366235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 
00:30:22.581 [2024-11-20 08:27:36.366492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.366526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.366709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.366744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.367029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.367063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.367325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.367361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.367658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.367693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 
00:30:22.581 [2024-11-20 08:27:36.367895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.367929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.368138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.368173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.368404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.368439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.368718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.368752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.369035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.369070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 
00:30:22.581 [2024-11-20 08:27:36.369297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.369333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.369523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.369557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.369743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.369777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.370035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.581 [2024-11-20 08:27:36.370070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.581 qpair failed and we were unable to recover it. 00:30:22.581 [2024-11-20 08:27:36.370323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.370359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 
00:30:22.582 [2024-11-20 08:27:36.370617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.370651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.370905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.370939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.371197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.371247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.371433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.371467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.371675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.371710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 
00:30:22.582 [2024-11-20 08:27:36.371986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.372022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.372291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.372328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.372623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.372664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.372927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.372962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.373101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.373135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 
00:30:22.582 [2024-11-20 08:27:36.373416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.373452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.373733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.373767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.373953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.373988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.374255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.374292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.374567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.374601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 
00:30:22.582 [2024-11-20 08:27:36.374805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.374840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.375043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.375077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.375360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.375396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.375621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.375655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 00:30:22.582 [2024-11-20 08:27:36.375861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.582 [2024-11-20 08:27:36.375896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.582 qpair failed and we were unable to recover it. 
00:30:22.582 [2024-11-20 08:27:36.376197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.376245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.376463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.376497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.376728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.376762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.377040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.377075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.377233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.377269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.377478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.377513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.377769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.377804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.377959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.377993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.378300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.378336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.378485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.378520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.378716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.378751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.378947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.378983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.379303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.379340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.379617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.379651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.379843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.379879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.380149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.380184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.582 [2024-11-20 08:27:36.380392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.582 [2024-11-20 08:27:36.380427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.582 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.380691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.380724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.380868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.380903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.381226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.381262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.381550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.381589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.381847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.381881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.382172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.382219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.382504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.382539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.382805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.382840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.383023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.383058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.383271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.383307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.383584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.383625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.383821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.383855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.384089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.384123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.384383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.384420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.384704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.384737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.385031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.385066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.385339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.385375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.385573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.385607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.385869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.385904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.386135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.386169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.386402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.386438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.386713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.386748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.387024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.387059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.387272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.387309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.387499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.387533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.387791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.387826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.388109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.388143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.388450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.388486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.388771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.388805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.389081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.389115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.389329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.389366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.389648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.389682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.389966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.583 [2024-11-20 08:27:36.390000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.583 qpair failed and we were unable to recover it.
00:30:22.583 [2024-11-20 08:27:36.390264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.390300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.390592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.390627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.390841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.390876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.391083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.391117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.391310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.391348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.391532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.391567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.391711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.391746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.392015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.392050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.392309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.392344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.392627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.392661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.392948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.392982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.393261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.393296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.393506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.393540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.393803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.393838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.394140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.394174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.394386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.394422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.394694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.394728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.395016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.395050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.395325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.395362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.395566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.395601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.395879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.395914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.396198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.396244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.396497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.396531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.396829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.396864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.397117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.397151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.397463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.397500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.397775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.397810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.398097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.398132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.398408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.398443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.398724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.398758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.398995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.399030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.399299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.399335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.399551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.399586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.399843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.399877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.400178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.400222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.400497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.400532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.400786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.400821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.401010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.584 [2024-11-20 08:27:36.401045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.584 qpair failed and we were unable to recover it.
00:30:22.584 [2024-11-20 08:27:36.401244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.401281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.401492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.401525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.401786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.401822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.402051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.402085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.402391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.402427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.402705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.402739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.402969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.403011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.403164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.403200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.403485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.403520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.403795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.403830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.404122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.404156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.404415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.404451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.404754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.404789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.404973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.405007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.405260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.405296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.405505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.405540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.405846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.405880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.406067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.406103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.406361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.406397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.406665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.406699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.406988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.407023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.407297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.407333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.407562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.407596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.407806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.407840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.408063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.408097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.408236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.408271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.408547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.585 [2024-11-20 08:27:36.408580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:22.585 qpair failed and we were unable to recover it.
00:30:22.585 [2024-11-20 08:27:36.408839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.585 [2024-11-20 08:27:36.408875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.585 qpair failed and we were unable to recover it. 00:30:22.585 [2024-11-20 08:27:36.409131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.585 [2024-11-20 08:27:36.409165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.585 qpair failed and we were unable to recover it. 00:30:22.585 [2024-11-20 08:27:36.409475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.585 [2024-11-20 08:27:36.409510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.585 qpair failed and we were unable to recover it. 00:30:22.585 [2024-11-20 08:27:36.409737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.585 [2024-11-20 08:27:36.409771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.585 qpair failed and we were unable to recover it. 00:30:22.585 [2024-11-20 08:27:36.409958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.585 [2024-11-20 08:27:36.409993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.585 qpair failed and we were unable to recover it. 
00:30:22.585 [2024-11-20 08:27:36.410268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.585 [2024-11-20 08:27:36.410304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.585 qpair failed and we were unable to recover it. 00:30:22.585 [2024-11-20 08:27:36.410428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.585 [2024-11-20 08:27:36.410463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.585 qpair failed and we were unable to recover it. 00:30:22.585 [2024-11-20 08:27:36.410746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.585 [2024-11-20 08:27:36.410782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.585 qpair failed and we were unable to recover it. 00:30:22.585 [2024-11-20 08:27:36.411085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.585 [2024-11-20 08:27:36.411119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.585 qpair failed and we were unable to recover it. 00:30:22.585 [2024-11-20 08:27:36.411378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.585 [2024-11-20 08:27:36.411415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.585 qpair failed and we were unable to recover it. 
00:30:22.585 [2024-11-20 08:27:36.411701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.585 [2024-11-20 08:27:36.411736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.585 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.411924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.411958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.412140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.412174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.412440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.412474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.412759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.412793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 
00:30:22.586 [2024-11-20 08:27:36.413076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.413110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.413388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.413424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.413640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.413675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.413879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.413914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.414116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.414158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 
00:30:22.586 [2024-11-20 08:27:36.414376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.414412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.414606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.414641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.414922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.414956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.415247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.415283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.415556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.415589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 
00:30:22.586 [2024-11-20 08:27:36.415791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.415825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.416088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.416123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.416422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.416457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.416643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.416676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.416861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.416895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 
00:30:22.586 [2024-11-20 08:27:36.417171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.417224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.417341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.417375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.417579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.417614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.417760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.417796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.418080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.418114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 
00:30:22.586 [2024-11-20 08:27:36.418415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.418451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.418715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.418749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.419028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.419062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.419352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.419388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.419574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.419608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 
00:30:22.586 [2024-11-20 08:27:36.419888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.419922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.420184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.420228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.420520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.420554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.420820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.420854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.421151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.421184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 
00:30:22.586 [2024-11-20 08:27:36.421454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.421489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.421792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.421827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.422031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.422065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.586 [2024-11-20 08:27:36.422274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.586 [2024-11-20 08:27:36.422310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.586 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.422616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.422650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 
00:30:22.587 [2024-11-20 08:27:36.422855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.422890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.423167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.423213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.423488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.423521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.423726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.423760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.424040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.424074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 
00:30:22.587 [2024-11-20 08:27:36.424199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.424247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.424546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.424580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.424724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.424758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.425031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.425065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.425271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.425312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 
00:30:22.587 [2024-11-20 08:27:36.425500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.425536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.425818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.425851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.426065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.426098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.426390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.426426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.426555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.426587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 
00:30:22.587 [2024-11-20 08:27:36.426894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.426929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.427190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.427233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.427495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.427530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.427734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.427768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.428021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.428055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 
00:30:22.587 [2024-11-20 08:27:36.428264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.428300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.428558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.428592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.428794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.428828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.429026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.429061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 00:30:22.587 [2024-11-20 08:27:36.429320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.429356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it. 
00:30:22.587 [2024-11-20 08:27:36.429663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.587 [2024-11-20 08:27:36.429698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:22.587 qpair failed and we were unable to recover it.
[identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." retry pairs for tqpair=0x7fc870000b90 (addr=10.0.0.2, port=4420) repeat continuously from 08:27:36.429962 through 08:27:36.453192; duplicate entries omitted]
00:30:22.590 [2024-11-20 08:27:36.453332] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x192daf0 is same with the state(6) to be set 00:30:22.590 [2024-11-20 08:27:36.453616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-11-20 08:27:36.453696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it.
[identical "connect() failed, errno = 111" / "qpair failed and we were unable to recover it." retry pairs for tqpair=0x7fc864000b90 (addr=10.0.0.2, port=4420) repeat continuously from 08:27:36.453997 through 08:27:36.461729; duplicate entries omitted]
00:30:22.590 [2024-11-20 08:27:36.461925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-11-20 08:27:36.461959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-11-20 08:27:36.462243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-11-20 08:27:36.462279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-11-20 08:27:36.462554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-11-20 08:27:36.462589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-11-20 08:27:36.462781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-11-20 08:27:36.462816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 00:30:22.590 [2024-11-20 08:27:36.463088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.590 [2024-11-20 08:27:36.463122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.590 qpair failed and we were unable to recover it. 
00:30:22.590 [2024-11-20 08:27:36.463318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.463354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.463617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.463651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.463936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.463971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.464275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.464311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.464507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.464540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 
00:30:22.591 [2024-11-20 08:27:36.464821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.464854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.465180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.465223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.465508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.465542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.465809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.465843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.466144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.466178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 
00:30:22.591 [2024-11-20 08:27:36.466388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.466429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.466629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.466663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.466946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.466981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.467262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.467297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.467578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.467613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 
00:30:22.591 [2024-11-20 08:27:36.467896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.467931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.468217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.468252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.468527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.468563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.468843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.468877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.469128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.469162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 
00:30:22.591 [2024-11-20 08:27:36.469459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.469495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.469772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.469806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.470058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.470092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.470395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.470431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.470694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.470729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 
00:30:22.591 [2024-11-20 08:27:36.471012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.471047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.471256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.471293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.471491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.471525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.471803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.471838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.472092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.472126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 
00:30:22.591 [2024-11-20 08:27:36.472335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.472371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.472562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.472596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.472729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.472762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.473041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.473076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.473288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.473324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 
00:30:22.591 [2024-11-20 08:27:36.473532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.473566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.473763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.591 [2024-11-20 08:27:36.473797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.591 qpair failed and we were unable to recover it. 00:30:22.591 [2024-11-20 08:27:36.473997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.474045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.474234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.474270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.474460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.474494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 
00:30:22.592 [2024-11-20 08:27:36.474771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.474806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.475084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.475118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.475376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.475412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.475668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.475702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.475910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.475944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 
00:30:22.592 [2024-11-20 08:27:36.476132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.476167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.476459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.476496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.476751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.476785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.476994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.477028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.477293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.477328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 
00:30:22.592 [2024-11-20 08:27:36.477612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.477646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.477837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.477872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.478058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.478092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.478369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.478404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.478674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.478708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 
00:30:22.592 [2024-11-20 08:27:36.478979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.479014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.479209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.479245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.479435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.479470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.479675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.479711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.479990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.480025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 
00:30:22.592 [2024-11-20 08:27:36.480225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.480261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.480539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.480573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.480780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.480815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.481017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.481052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.481244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.481282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 
00:30:22.592 [2024-11-20 08:27:36.481558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.481592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.481863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.481898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.482189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.482233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.482494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.482528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 00:30:22.592 [2024-11-20 08:27:36.482825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.592 [2024-11-20 08:27:36.482858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.592 qpair failed and we were unable to recover it. 
00:30:22.592 [2024-11-20 08:27:36.483124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.592 [2024-11-20 08:27:36.483158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:22.592 qpair failed and we were unable to recover it.
[... the same three-line error sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats for every reconnect attempt from 08:27:36.483429 through 08:27:36.515420 ...]
00:30:22.596 [2024-11-20 08:27:36.515719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.515754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.515956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.515991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.516192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.516234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.516460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.516494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.516681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.516718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 
00:30:22.596 [2024-11-20 08:27:36.516997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.517032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.517167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.517211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.517471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.517506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.517649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.517682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.517939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.517974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 
00:30:22.596 [2024-11-20 08:27:36.518161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.518214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.518479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.518514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.518704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.518746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.519017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.519055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.519335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.519372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 
00:30:22.596 [2024-11-20 08:27:36.519651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.519686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.519963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.519998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.520200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.520242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.520382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.520418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.520649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.520686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 
00:30:22.596 [2024-11-20 08:27:36.520969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.521006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.521216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.521253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.521458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.521492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.521678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.521712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.521992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.522026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 
00:30:22.596 [2024-11-20 08:27:36.522246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.522283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.522439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.522473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.522749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.522784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.523061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.523096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.523308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.523344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 
00:30:22.596 [2024-11-20 08:27:36.523543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.523578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.523761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.523795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.523989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.524024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.524234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.524270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.524471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.524506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 
00:30:22.596 [2024-11-20 08:27:36.524702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.524735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.596 qpair failed and we were unable to recover it. 00:30:22.596 [2024-11-20 08:27:36.524927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.596 [2024-11-20 08:27:36.524962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.525239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.525276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.525462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.525497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.525644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.525678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 
00:30:22.597 [2024-11-20 08:27:36.525807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.525845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.526033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.526067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.526331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.526364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.526658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.526692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.527011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.527046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 
00:30:22.597 [2024-11-20 08:27:36.527326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.527362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.527548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.527580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.527847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.527882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.528023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.528055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.528325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.528361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 
00:30:22.597 [2024-11-20 08:27:36.528513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.528550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.528707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.528739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.528929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.528973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.529233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.529269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.529452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.529484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 
00:30:22.597 [2024-11-20 08:27:36.529763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.529798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.530085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.530120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.530388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.530423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.530607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.530638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.530835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.530873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 
00:30:22.597 [2024-11-20 08:27:36.531197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.531258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.531572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.531607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.531889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.531924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.532150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.532186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.532398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.532434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 
00:30:22.597 [2024-11-20 08:27:36.532621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.532656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.532883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.532917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.533049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.533084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.533290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.533327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 00:30:22.597 [2024-11-20 08:27:36.533536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.597 [2024-11-20 08:27:36.533573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.597 qpair failed and we were unable to recover it. 
00:30:22.597 [2024-11-20 08:27:36.533792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-11-20 08:27:36.533828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-11-20 08:27:36.534032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-11-20 08:27:36.534067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-11-20 08:27:36.534327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-11-20 08:27:36.534364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-11-20 08:27:36.534654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-11-20 08:27:36.534693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 00:30:22.598 [2024-11-20 08:27:36.535000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.598 [2024-11-20 08:27:36.535035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.598 qpair failed and we were unable to recover it. 
00:30:22.598 [2024-11-20 08:27:36.535290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.598 [2024-11-20 08:27:36.535326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:22.598 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." messages repeated for tqpair=0x7fc864000b90 and tqpair=0x191fba0 from 08:27:36.535520 through 08:27:36.566240 ...]
00:30:22.601 [2024-11-20 08:27:36.566496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.566531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.566839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.566874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.567075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.567110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.567326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.567362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.567504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.567538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 
00:30:22.601 [2024-11-20 08:27:36.567733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.567768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.568060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.568096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.568280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.568316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.568520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.568555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.568748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.568782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 
00:30:22.601 [2024-11-20 08:27:36.569061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.569096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.569357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.569393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.569606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.569641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.569842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.569876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.570132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.570166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 
00:30:22.601 [2024-11-20 08:27:36.570396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.570432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.570711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.570745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.570883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.570918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.571117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.571152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.571457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.571493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 
00:30:22.601 [2024-11-20 08:27:36.571763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.571798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.572010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.572044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.572321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.572357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.572503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.572538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.572842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.572876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 
00:30:22.601 [2024-11-20 08:27:36.572993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.601 [2024-11-20 08:27:36.573024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.601 qpair failed and we were unable to recover it. 00:30:22.601 [2024-11-20 08:27:36.573165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.573200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.573482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.573516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.573818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.573853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.574058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.574094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 
00:30:22.602 [2024-11-20 08:27:36.574430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.574467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.574728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.574762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.574986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.575020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.575380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.575417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.575607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.575642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 
00:30:22.602 [2024-11-20 08:27:36.575855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.575889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.576091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.576131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.576266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.576303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.576572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.576607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.576792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.576827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 
00:30:22.602 [2024-11-20 08:27:36.577108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.577142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.577373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.577409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.577600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.577634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.577853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.577887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.578072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.578107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 
00:30:22.602 [2024-11-20 08:27:36.578291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.578327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.578607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.578641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.578773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.578807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.579067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.579101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.579372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.579408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 
00:30:22.602 [2024-11-20 08:27:36.579625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.579661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.579932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.579966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.580157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.580190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.602 qpair failed and we were unable to recover it. 00:30:22.602 [2024-11-20 08:27:36.580414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.602 [2024-11-20 08:27:36.580449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.603 qpair failed and we were unable to recover it. 00:30:22.603 [2024-11-20 08:27:36.580706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.603 [2024-11-20 08:27:36.580742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.603 qpair failed and we were unable to recover it. 
00:30:22.881 [2024-11-20 08:27:36.580943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.580979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.581257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.581295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.581551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.581587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.581785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.581819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.582018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.582053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 
00:30:22.881 [2024-11-20 08:27:36.582337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.582373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.582653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.582687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.582944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.582979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.583267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.583303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.583507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.583541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 
00:30:22.881 [2024-11-20 08:27:36.583739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.583774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.583957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.583991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.584251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.584287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.584488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.584522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.584801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.584835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 
00:30:22.881 [2024-11-20 08:27:36.585106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.585140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.585429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.585465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.585649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.585684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.585936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.585971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 00:30:22.881 [2024-11-20 08:27:36.586256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.881 [2024-11-20 08:27:36.586292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.881 qpair failed and we were unable to recover it. 
00:30:22.881 [2024-11-20 08:27:36.586588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.881 [2024-11-20 08:27:36.586624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:22.881 qpair failed and we were unable to recover it.
[... same connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock / "qpair failed and we were unable to recover it." triplet repeated continuously from 08:27:36.586 through 08:27:36.618 (log times 00:30:22.881-00:30:22.884), always for tqpair=0x7fc864000b90, addr=10.0.0.2, port=4420; duplicate entries omitted ...]
00:30:22.884 [2024-11-20 08:27:36.618622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.884 [2024-11-20 08:27:36.618658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.884 qpair failed and we were unable to recover it. 00:30:22.884 [2024-11-20 08:27:36.618914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.884 [2024-11-20 08:27:36.618949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.884 qpair failed and we were unable to recover it. 00:30:22.884 [2024-11-20 08:27:36.619132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.884 [2024-11-20 08:27:36.619167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.884 qpair failed and we were unable to recover it. 00:30:22.884 [2024-11-20 08:27:36.619377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.884 [2024-11-20 08:27:36.619414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.884 qpair failed and we were unable to recover it. 00:30:22.884 [2024-11-20 08:27:36.619567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.884 [2024-11-20 08:27:36.619602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.884 qpair failed and we were unable to recover it. 
00:30:22.884 [2024-11-20 08:27:36.619852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.884 [2024-11-20 08:27:36.619887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.884 qpair failed and we were unable to recover it. 00:30:22.884 [2024-11-20 08:27:36.620150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.884 [2024-11-20 08:27:36.620192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.884 qpair failed and we were unable to recover it. 00:30:22.884 [2024-11-20 08:27:36.620396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.884 [2024-11-20 08:27:36.620432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.884 qpair failed and we were unable to recover it. 00:30:22.884 [2024-11-20 08:27:36.620575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.884 [2024-11-20 08:27:36.620610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.884 qpair failed and we were unable to recover it. 00:30:22.884 [2024-11-20 08:27:36.620819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.884 [2024-11-20 08:27:36.620853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.884 qpair failed and we were unable to recover it. 
00:30:22.884 [2024-11-20 08:27:36.621119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.884 [2024-11-20 08:27:36.621154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.884 qpair failed and we were unable to recover it. 00:30:22.884 [2024-11-20 08:27:36.621427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.884 [2024-11-20 08:27:36.621462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.884 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.621666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.621701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.621906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.621940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.622145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.622180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 
00:30:22.885 [2024-11-20 08:27:36.622398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.622433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.622632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.622667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.622868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.622902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.623174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.623230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.623538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.623573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 
00:30:22.885 [2024-11-20 08:27:36.623779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.623814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.624018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.624052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.624242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.624280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.624490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.624525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.624734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.624770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 
00:30:22.885 [2024-11-20 08:27:36.624975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.625010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.625270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.625305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.625591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.625626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.625827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.625861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.626122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.626156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 
00:30:22.885 [2024-11-20 08:27:36.626360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.626395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.626530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.626565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.626822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.626857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.627142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.627177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.627454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.627490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 
00:30:22.885 [2024-11-20 08:27:36.627639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.627673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.628003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.628038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.628235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.628271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.628474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.628508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.628662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.628698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 
00:30:22.885 [2024-11-20 08:27:36.628901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.628935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.629122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.629157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.629452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.629488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.629698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.885 [2024-11-20 08:27:36.629731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.885 qpair failed and we were unable to recover it. 00:30:22.885 [2024-11-20 08:27:36.629988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.630022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 
00:30:22.886 [2024-11-20 08:27:36.630252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.630289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.630519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.630560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.630890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.630925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.631061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.631095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.631245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.631283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 
00:30:22.886 [2024-11-20 08:27:36.631491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.631526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.631715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.631751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.631902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.631937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.632122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.632156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.632383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.632419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 
00:30:22.886 [2024-11-20 08:27:36.632624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.632659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.632936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.632971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.633277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.633313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.633466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.633500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.633633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.633668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 
00:30:22.886 [2024-11-20 08:27:36.633896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.633931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.634190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.634239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.634449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.634484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.634691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.634725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.635009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.635044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 
00:30:22.886 [2024-11-20 08:27:36.635306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.635344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.635553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.635588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.635844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.635879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.636079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.636114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.636385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.636422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 
00:30:22.886 [2024-11-20 08:27:36.636635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.636669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.636940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.636975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.637172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.637216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.637483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.637518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 00:30:22.886 [2024-11-20 08:27:36.637772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.637807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 
00:30:22.886 [2024-11-20 08:27:36.638060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.886 [2024-11-20 08:27:36.638095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.886 qpair failed and we were unable to recover it. 
00:30:22.889 [2024-11-20 08:27:36.669859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.889 [2024-11-20 08:27:36.669893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.889 qpair failed and we were unable to recover it. 
00:30:22.889 [2024-11-20 08:27:36.670198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.889 [2024-11-20 08:27:36.670243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.889 qpair failed and we were unable to recover it. 00:30:22.889 [2024-11-20 08:27:36.670515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.889 [2024-11-20 08:27:36.670551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.889 qpair failed and we were unable to recover it. 00:30:22.889 [2024-11-20 08:27:36.670830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.889 [2024-11-20 08:27:36.670864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.889 qpair failed and we were unable to recover it. 00:30:22.889 [2024-11-20 08:27:36.671154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.889 [2024-11-20 08:27:36.671189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.889 qpair failed and we were unable to recover it. 00:30:22.889 [2024-11-20 08:27:36.671519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.889 [2024-11-20 08:27:36.671555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.889 qpair failed and we were unable to recover it. 
00:30:22.889 [2024-11-20 08:27:36.671757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.671791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.671976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.672009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.672216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.672252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.672507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.672541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.672736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.672771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 
00:30:22.890 [2024-11-20 08:27:36.672972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.673007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.673162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.673195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.673505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.673540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.673813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.673847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.674101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.674137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 
00:30:22.890 [2024-11-20 08:27:36.674338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.674375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.674565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.674605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.674859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.674893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.675196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.675243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.675537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.675572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 
00:30:22.890 [2024-11-20 08:27:36.675862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.675896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.676170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.676216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.676517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.676551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.676812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.676846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.677148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.677183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 
00:30:22.890 [2024-11-20 08:27:36.677490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.677525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.677788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.677822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.678014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.678049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.678333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.678370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.678520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.678555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 
00:30:22.890 [2024-11-20 08:27:36.678842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.678877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.679174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.679228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.679533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.679568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.679760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.679795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.680055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.680090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 
00:30:22.890 [2024-11-20 08:27:36.680293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.680331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.680529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.680563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.680843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.680877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.890 [2024-11-20 08:27:36.681155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.890 [2024-11-20 08:27:36.681190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.890 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.681388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.681423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 
00:30:22.891 [2024-11-20 08:27:36.681726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.681760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.681987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.682020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.682225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.682261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.682454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.682489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.682771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.682806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 
00:30:22.891 [2024-11-20 08:27:36.683012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.683047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.683349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.683387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.683530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.683565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.683687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.683719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.683983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.684017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 
00:30:22.891 [2024-11-20 08:27:36.684305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.684341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.684622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.684657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.684939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.684972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.685169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.685213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.685474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.685510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 
00:30:22.891 [2024-11-20 08:27:36.685704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.685738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.685923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.685964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.686108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.686143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.686369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.686404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.686592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.686627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 
00:30:22.891 [2024-11-20 08:27:36.686899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.686934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.687191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.687238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.687525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.687559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.687841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.687876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.687996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.688029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 
00:30:22.891 [2024-11-20 08:27:36.688309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.688345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.688572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.688607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.688820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.688855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.689111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.689145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.689347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.689382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 
00:30:22.891 [2024-11-20 08:27:36.689591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.689626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.689902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.689937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.690193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.690248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.690508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.690542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 00:30:22.891 [2024-11-20 08:27:36.690825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.891 [2024-11-20 08:27:36.690859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:22.891 qpair failed and we were unable to recover it. 
00:30:22.891 [2024-11-20 08:27:36.691082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.891 [2024-11-20 08:27:36.691116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:22.891 qpair failed and we were unable to recover it.
00:30:22.891 [... the same connect() failed, errno = 111 / qpair failed pair repeats for tqpair=0x7fc864000b90 through 08:27:36.704 ...]
00:30:22.893 [2024-11-20 08:27:36.704424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.893 [2024-11-20 08:27:36.704507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.893 qpair failed and we were unable to recover it.
00:30:22.895 [... the same failure pair repeats for tqpair=0x191fba0 through 08:27:36.723; every attempt targets addr=10.0.0.2, port=4420 and none recovers ...]
00:30:22.895 [2024-11-20 08:27:36.723757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.723793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.724005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.724041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.724323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.724360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.724598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.724634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.724830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.724865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 
00:30:22.895 [2024-11-20 08:27:36.725145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.725180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.725383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.725418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.725642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.725678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.725932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.725967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.726232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.726269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 
00:30:22.895 [2024-11-20 08:27:36.726486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.726521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.726777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.726814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.727092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.727127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.727397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.727436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.727742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.727777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 
00:30:22.895 [2024-11-20 08:27:36.728011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.728047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.728251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.728288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.728572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.728608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.728856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.728892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.729147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.729183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 
00:30:22.895 [2024-11-20 08:27:36.729348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.729384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.729585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.729620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.729845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.729881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.730136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.730171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.730384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.730421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 
00:30:22.895 [2024-11-20 08:27:36.730610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.730646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.730923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.730958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.731278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.731315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.731508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.731544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.731803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.731837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 
00:30:22.895 [2024-11-20 08:27:36.732035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.732071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.732352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.732389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.732526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.732560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.732768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.732804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 00:30:22.895 [2024-11-20 08:27:36.733022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.895 [2024-11-20 08:27:36.733057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.895 qpair failed and we were unable to recover it. 
00:30:22.895 [2024-11-20 08:27:36.733328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.733365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.733598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.733633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.733889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.733926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.734200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.734245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.734546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.734582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 
00:30:22.896 [2024-11-20 08:27:36.734770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.734805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.735051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.735086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.735365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.735401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.735604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.735640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.735786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.735822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 
00:30:22.896 [2024-11-20 08:27:36.736036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.736072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.736330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.736366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.736507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.736542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.736665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.736700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.736961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.736996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 
00:30:22.896 [2024-11-20 08:27:36.737219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.737256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.737459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.737494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.737747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.737784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.738018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.738053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.738235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.738271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 
00:30:22.896 [2024-11-20 08:27:36.738573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.738616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.738902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.738938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.739163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.739198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.739412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.739448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.739658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.739693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 
00:30:22.896 [2024-11-20 08:27:36.739973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.740009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.740199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.740245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.740526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.740561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.740710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.740745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.741010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.741045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 
00:30:22.896 [2024-11-20 08:27:36.741354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.741392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.741613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.741648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.741807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.741841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.742119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.742154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.742438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.742475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 
00:30:22.896 [2024-11-20 08:27:36.742752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.742786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.743080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.743116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.743339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.896 [2024-11-20 08:27:36.743377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.896 qpair failed and we were unable to recover it. 00:30:22.896 [2024-11-20 08:27:36.743576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.897 [2024-11-20 08:27:36.743611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.897 qpair failed and we were unable to recover it. 00:30:22.897 [2024-11-20 08:27:36.743831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.897 [2024-11-20 08:27:36.743866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.897 qpair failed and we were unable to recover it. 
00:30:22.897 [2024-11-20 08:27:36.744050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.897 [2024-11-20 08:27:36.744086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.897 qpair failed and we were unable to recover it. 00:30:22.897 [2024-11-20 08:27:36.744290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.897 [2024-11-20 08:27:36.744327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.897 qpair failed and we were unable to recover it. 00:30:22.897 [2024-11-20 08:27:36.744596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.897 [2024-11-20 08:27:36.744632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.897 qpair failed and we were unable to recover it. 00:30:22.897 [2024-11-20 08:27:36.744818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.897 [2024-11-20 08:27:36.744853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.897 qpair failed and we were unable to recover it. 00:30:22.897 [2024-11-20 08:27:36.745126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.897 [2024-11-20 08:27:36.745160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.897 qpair failed and we were unable to recover it. 
00:30:22.897 [... the same three-line error repeats continuously from 08:27:36.745398 through 08:27:36.774382: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it ...]
00:30:22.900 [2024-11-20 08:27:36.774589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.774624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.774908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.774945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.775219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.775255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.775373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.775408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.775630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.775666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 
00:30:22.900 [2024-11-20 08:27:36.775875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.775912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.776129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.776164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.776436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.776472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.776757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.776793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.777064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.777102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 
00:30:22.900 [2024-11-20 08:27:36.777340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.777379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.777638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.777673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.777943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.777978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.778131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.778165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.778418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.778461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 
00:30:22.900 [2024-11-20 08:27:36.778719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.778754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.779017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.779051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.779267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.779304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.779585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.779620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.779840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.779876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 
00:30:22.900 [2024-11-20 08:27:36.780085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.780121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.780380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.780417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.780697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.780732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.781009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.781044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.781251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.781288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 
00:30:22.900 [2024-11-20 08:27:36.781495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.781535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.781768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.781804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.782084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.782120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.782345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.782382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.782522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.782556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 
00:30:22.900 [2024-11-20 08:27:36.782715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.782749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.783047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.900 [2024-11-20 08:27:36.783083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.900 qpair failed and we were unable to recover it. 00:30:22.900 [2024-11-20 08:27:36.783371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.783408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.783683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.783718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.783920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.783956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 
00:30:22.901 [2024-11-20 08:27:36.784242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.784282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.784535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.784572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.784802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.784837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.785118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.785153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.785428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.785464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 
00:30:22.901 [2024-11-20 08:27:36.785691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.785726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.785950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.785984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.786246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.786283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.786501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.786536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.786728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.786763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 
00:30:22.901 [2024-11-20 08:27:36.786997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.787031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.787286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.787322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.787523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.787558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.787862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.787897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.788179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.788226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 
00:30:22.901 [2024-11-20 08:27:36.788461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.788497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.788753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.788789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.788923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.788957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.789072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.789108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.789330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.789366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 
00:30:22.901 [2024-11-20 08:27:36.789619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.789655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.789963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.789998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.790281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.790318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.790462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.790497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.790759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.790793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 
00:30:22.901 [2024-11-20 08:27:36.791070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.791106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.791333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.791370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.791633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.791669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.791855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.791890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.792093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.792129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 
00:30:22.901 [2024-11-20 08:27:36.792386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.792423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.792606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.792641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.792897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.792934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.793142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.793178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 00:30:22.901 [2024-11-20 08:27:36.793491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.901 [2024-11-20 08:27:36.793529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.901 qpair failed and we were unable to recover it. 
00:30:22.901 [2024-11-20 08:27:36.793740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.902 [2024-11-20 08:27:36.793775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.902 qpair failed and we were unable to recover it. 00:30:22.902 [2024-11-20 08:27:36.794031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.902 [2024-11-20 08:27:36.794066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.902 qpair failed and we were unable to recover it. 00:30:22.902 [2024-11-20 08:27:36.794251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.902 [2024-11-20 08:27:36.794288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.902 qpair failed and we were unable to recover it. 00:30:22.902 [2024-11-20 08:27:36.794423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.902 [2024-11-20 08:27:36.794457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.902 qpair failed and we were unable to recover it. 00:30:22.902 [2024-11-20 08:27:36.794667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.902 [2024-11-20 08:27:36.794702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.902 qpair failed and we were unable to recover it. 
00:30:22.902 [2024-11-20 08:27:36.794931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.794966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.795191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.795240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.795499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.795533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.795736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.795770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.796066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.796102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.796244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.796282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.796565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.796600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.796858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.796900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.797133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.797168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.797423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.797459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.797606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.797641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.797775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.797809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.798091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.798126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.798393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.798429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.798715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.798749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.799010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.799045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.799356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.799392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.799616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.799651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.799879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.799913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.800107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.800143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.800302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.800339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.800536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.800571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.800825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.800860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.801113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.801148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.801399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.801435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.801645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.801680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.801898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.801932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.802132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.802167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.802474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.802512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.802708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.802743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.802965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.803000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.803306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.803342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.803555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.803589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.902 [2024-11-20 08:27:36.803863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.902 [2024-11-20 08:27:36.803898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.902 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.804132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.804173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.804420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.804456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.804595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.804629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.804938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.804972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.805155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.805190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.805390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.805426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.805580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.805615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.805869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.805905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.806176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.806236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.806398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.806434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.806641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.806677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.806833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.806867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.807081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.807116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.807366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.807404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.807623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.807658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.807953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.807988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.808259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.808297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.808521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.808557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.808760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.808795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.809003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.809039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.809341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.809377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.809517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.809553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.809777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.809813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.810046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.810081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.810315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.810351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.810495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.810529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.810728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.810764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.810998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.811038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.811245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.811281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.811470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.811506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.811738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.811774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.811961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.811997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.812148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.812183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.812336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.812372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.903 [2024-11-20 08:27:36.812649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.903 [2024-11-20 08:27:36.812684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.903 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.812932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.812968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.813225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.813262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.813462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.813497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.813796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.813831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.814050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.814085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.814277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.814313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.814525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.814562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.814787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.814822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.815008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.815044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.815347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.815384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.815569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.815604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.815820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.815855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.816076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.816112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.816420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.816456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.816711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.816747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.816999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.817034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.817169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.817215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.817476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.817510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.817797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.817831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.818111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.818147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.818389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.818426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.818732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.818768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.819048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.819082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.819290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.819327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.819583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.819618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.819749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.819785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.819982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.820017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.820274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.820310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.820530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.820564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.820714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.820748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.820981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.821016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.821225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.821261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.821394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.821429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.821702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.821743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.822027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.822062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.822252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.904 [2024-11-20 08:27:36.822289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.904 qpair failed and we were unable to recover it.
00:30:22.904 [2024-11-20 08:27:36.822445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.905 [2024-11-20 08:27:36.822479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.905 qpair failed and we were unable to recover it.
00:30:22.905 [2024-11-20 08:27:36.822681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.905 [2024-11-20 08:27:36.822716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.905 qpair failed and we were unable to recover it.
00:30:22.905 [2024-11-20 08:27:36.822927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.905 [2024-11-20 08:27:36.822962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.905 qpair failed and we were unable to recover it.
00:30:22.905 [2024-11-20 08:27:36.823268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.905 [2024-11-20 08:27:36.823305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.905 qpair failed and we were unable to recover it.
00:30:22.905 [2024-11-20 08:27:36.823447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.905 [2024-11-20 08:27:36.823483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.905 qpair failed and we were unable to recover it.
00:30:22.905 [2024-11-20 08:27:36.823695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.905 [2024-11-20 08:27:36.823730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.905 qpair failed and we were unable to recover it.
00:30:22.905 [2024-11-20 08:27:36.823943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.905 [2024-11-20 08:27:36.823979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.905 qpair failed and we were unable to recover it.
00:30:22.905 [2024-11-20 08:27:36.824171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.905 [2024-11-20 08:27:36.824216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.905 qpair failed and we were unable to recover it.
00:30:22.905 [2024-11-20 08:27:36.824419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.905 [2024-11-20 08:27:36.824454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.905 qpair failed and we were unable to recover it.
00:30:22.905 [2024-11-20 08:27:36.824661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:22.905 [2024-11-20 08:27:36.824696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:22.905 qpair failed and we were unable to recover it.
00:30:22.905 [2024-11-20 08:27:36.825011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.825046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.825254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.825291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.825565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.825600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.825810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.825845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.826128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.826164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 
00:30:22.905 [2024-11-20 08:27:36.826361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.826400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.826586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.826621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.826874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.826908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.827107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.827142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.827397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.827434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 
00:30:22.905 [2024-11-20 08:27:36.827558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.827593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.827795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.827829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.828107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.828144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.828376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.828413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.828670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.828717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 
00:30:22.905 [2024-11-20 08:27:36.828952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.828987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.829311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.829349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.829536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.829570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.829794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.829829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.830014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.830049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 
00:30:22.905 [2024-11-20 08:27:36.830327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.830363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.830576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.830612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.830891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.830927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.831129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.831164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.832880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.832942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 
00:30:22.905 [2024-11-20 08:27:36.833181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.833234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.833470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.833505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.833764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.833799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.905 qpair failed and we were unable to recover it. 00:30:22.905 [2024-11-20 08:27:36.834034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.905 [2024-11-20 08:27:36.834070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.834285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.834321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 
00:30:22.906 [2024-11-20 08:27:36.834546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.834581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.834848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.834883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.835088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.835122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.835382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.835419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.835573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.835608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 
00:30:22.906 [2024-11-20 08:27:36.835899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.835934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.836229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.836264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.836422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.836461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.836668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.836700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.836905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.836940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 
00:30:22.906 [2024-11-20 08:27:36.837236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.837273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.837436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.837478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.837683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.837717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.838023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.838057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.838258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.838295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 
00:30:22.906 [2024-11-20 08:27:36.838493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.838527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.838713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.838747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.838952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.838987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.839184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.839230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.839508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.839543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 
00:30:22.906 [2024-11-20 08:27:36.839844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.839878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.840102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.840136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.840392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.840428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.840640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.840676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.840901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.840935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 
00:30:22.906 [2024-11-20 08:27:36.841081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.841116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.841389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.841426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.841618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.841653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.841841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.841876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.842098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.842132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 
00:30:22.906 [2024-11-20 08:27:36.842334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.842372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.842529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.842563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.842819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.842854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.843062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.843097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.843384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.843421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 
00:30:22.906 [2024-11-20 08:27:36.843559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.843595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.906 qpair failed and we were unable to recover it. 00:30:22.906 [2024-11-20 08:27:36.843878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.906 [2024-11-20 08:27:36.843914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 00:30:22.907 [2024-11-20 08:27:36.844185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.844231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 00:30:22.907 [2024-11-20 08:27:36.844492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.844528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 00:30:22.907 [2024-11-20 08:27:36.844722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.844758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 
00:30:22.907 [2024-11-20 08:27:36.845040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.845076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 00:30:22.907 [2024-11-20 08:27:36.845285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.845323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 00:30:22.907 [2024-11-20 08:27:36.845463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.845497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 00:30:22.907 [2024-11-20 08:27:36.845695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.845730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 00:30:22.907 [2024-11-20 08:27:36.846034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.846068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 
00:30:22.907 [2024-11-20 08:27:36.846184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.846248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 00:30:22.907 [2024-11-20 08:27:36.846367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.846402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 00:30:22.907 [2024-11-20 08:27:36.846655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.846690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 00:30:22.907 [2024-11-20 08:27:36.846923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.846958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 00:30:22.907 [2024-11-20 08:27:36.847235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.907 [2024-11-20 08:27:36.847272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.907 qpair failed and we were unable to recover it. 
00:30:22.910 [2024-11-20 08:27:36.877047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.877083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.877370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.877406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.877553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.877588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.877894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.877928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.878140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.878175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 
00:30:22.910 [2024-11-20 08:27:36.878475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.878511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.878650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.878685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.878981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.879015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.879276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.879313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.879518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.879552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 
00:30:22.910 [2024-11-20 08:27:36.879784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.879820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.880105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.880141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.880301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.880338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.880507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.880541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.880742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.880784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 
00:30:22.910 [2024-11-20 08:27:36.881106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.881141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.881317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.881354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.881537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.881573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.881853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.881888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.882236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.882274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 
00:30:22.910 [2024-11-20 08:27:36.882552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.882587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.882712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.882747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.883027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.883062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.883332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.883368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 00:30:22.910 [2024-11-20 08:27:36.883556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.910 [2024-11-20 08:27:36.883592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.910 qpair failed and we were unable to recover it. 
00:30:22.910 [2024-11-20 08:27:36.883864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.911 [2024-11-20 08:27:36.883899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.911 qpair failed and we were unable to recover it. 00:30:22.911 [2024-11-20 08:27:36.884058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.911 [2024-11-20 08:27:36.884093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.911 qpair failed and we were unable to recover it. 00:30:22.911 [2024-11-20 08:27:36.884299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.911 [2024-11-20 08:27:36.884335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.911 qpair failed and we were unable to recover it. 00:30:22.911 [2024-11-20 08:27:36.884486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.911 [2024-11-20 08:27:36.884521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.911 qpair failed and we were unable to recover it. 00:30:22.911 [2024-11-20 08:27:36.884750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.911 [2024-11-20 08:27:36.884784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.911 qpair failed and we were unable to recover it. 
00:30:22.911 [2024-11-20 08:27:36.885052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.911 [2024-11-20 08:27:36.885087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.911 qpair failed and we were unable to recover it. 00:30:22.911 [2024-11-20 08:27:36.885270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.911 [2024-11-20 08:27:36.885307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.911 qpair failed and we were unable to recover it. 00:30:22.911 [2024-11-20 08:27:36.885459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.911 [2024-11-20 08:27:36.885493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.911 qpair failed and we were unable to recover it. 00:30:22.911 [2024-11-20 08:27:36.885708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.911 [2024-11-20 08:27:36.885744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.911 qpair failed and we were unable to recover it. 00:30:22.911 [2024-11-20 08:27:36.885878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:22.911 [2024-11-20 08:27:36.885914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:22.911 qpair failed and we were unable to recover it. 
00:30:23.217 [2024-11-20 08:27:36.886171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.217 [2024-11-20 08:27:36.886221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.217 qpair failed and we were unable to recover it. 00:30:23.217 [2024-11-20 08:27:36.886424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.217 [2024-11-20 08:27:36.886461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.217 qpair failed and we were unable to recover it. 00:30:23.217 [2024-11-20 08:27:36.886757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.217 [2024-11-20 08:27:36.886793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.217 qpair failed and we were unable to recover it. 00:30:23.217 [2024-11-20 08:27:36.887001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.217 [2024-11-20 08:27:36.887035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.217 qpair failed and we were unable to recover it. 00:30:23.217 [2024-11-20 08:27:36.887292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.217 [2024-11-20 08:27:36.887330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.217 qpair failed and we were unable to recover it. 
00:30:23.217 [2024-11-20 08:27:36.887605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.217 [2024-11-20 08:27:36.887641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.217 qpair failed and we were unable to recover it. 00:30:23.217 [2024-11-20 08:27:36.887929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.217 [2024-11-20 08:27:36.887970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.217 qpair failed and we were unable to recover it. 00:30:23.217 [2024-11-20 08:27:36.888166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.217 [2024-11-20 08:27:36.888213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.217 qpair failed and we were unable to recover it. 00:30:23.217 [2024-11-20 08:27:36.888365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.217 [2024-11-20 08:27:36.888400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.217 qpair failed and we were unable to recover it. 00:30:23.217 [2024-11-20 08:27:36.888601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.217 [2024-11-20 08:27:36.888636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.217 qpair failed and we were unable to recover it. 
00:30:23.217 [2024-11-20 08:27:36.888904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.217 [2024-11-20 08:27:36.888940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.217 qpair failed and we were unable to recover it. 00:30:23.217 [2024-11-20 08:27:36.889096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.889130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.889418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.889454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.889675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.889709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.889893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.889928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 
00:30:23.218 [2024-11-20 08:27:36.890112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.890147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.890392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.890429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.890592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.890627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.890788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.890824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.891047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.891081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 
00:30:23.218 [2024-11-20 08:27:36.891241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.891278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.891411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.891446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.891701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.891736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.891924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.891959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.892079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.892114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 
00:30:23.218 [2024-11-20 08:27:36.892260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.892296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.892444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.892479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.892620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.892656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.892874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.892908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.893128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.893163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 
00:30:23.218 [2024-11-20 08:27:36.893411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.893448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.893657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.893692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.893940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.893975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.894171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.894223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.894479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.894514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 
00:30:23.218 [2024-11-20 08:27:36.894787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.894822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.894984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.895021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.895298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.895336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.895534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.895570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.895778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.895814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 
00:30:23.218 [2024-11-20 08:27:36.895962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.895998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.896197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.896244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.896458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.896494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.896653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.896689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.896908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.896943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 
00:30:23.218 [2024-11-20 08:27:36.897237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.897273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.897471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.897507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.897710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.897746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.897957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.218 [2024-11-20 08:27:36.897992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.218 qpair failed and we were unable to recover it. 00:30:23.218 [2024-11-20 08:27:36.898176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.898223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 
00:30:23.219 [2024-11-20 08:27:36.898435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.898471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.898773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.898808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.899024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.899059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.899260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.899297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.899555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.899590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 
00:30:23.219 [2024-11-20 08:27:36.899735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.899770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.900026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.900061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.900331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.900368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.904232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.904298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.904633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.904673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 
00:30:23.219 [2024-11-20 08:27:36.904938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.904975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.905275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.905315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.905645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.905684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.905916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.905953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.906169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.906221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 
00:30:23.219 [2024-11-20 08:27:36.906384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.906419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.906626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.906662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.906947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.906986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.907254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.907293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.907511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.907547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 
00:30:23.219 [2024-11-20 08:27:36.907753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.907788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.908056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.908093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.908242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.908279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.908423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.908459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.908727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.908808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 
00:30:23.219 [2024-11-20 08:27:36.909129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.909168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.909384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.909421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.909580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.909615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.909925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.909959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.910227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.910264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 
00:30:23.219 [2024-11-20 08:27:36.910530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.910566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.910725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.910761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.911010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.911043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.911184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.911232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.911433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.911468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 
00:30:23.219 [2024-11-20 08:27:36.911663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.911697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.911936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.219 [2024-11-20 08:27:36.911971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.219 qpair failed and we were unable to recover it. 00:30:23.219 [2024-11-20 08:27:36.912259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.912306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.912561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.912595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.912900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.912936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 
00:30:23.220 [2024-11-20 08:27:36.913148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.913183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.914771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.914834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.915150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.915217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.915472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.915500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.915763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.915788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 
00:30:23.220 [2024-11-20 08:27:36.916092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.916117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.916314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.916341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.916524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.916549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.916766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.916792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.917060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.917090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 
00:30:23.220 [2024-11-20 08:27:36.917293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.917321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.917468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.917493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.917621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.917646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.917917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.917943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.918179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.918213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 
00:30:23.220 [2024-11-20 08:27:36.918382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.918407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.918669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.918695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.919019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.919045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.919262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.919290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.919497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.919524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 
00:30:23.220 [2024-11-20 08:27:36.919782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.919803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.921178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.921230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.921502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.921522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.921748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.921768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.921971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.921991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 
00:30:23.220 [2024-11-20 08:27:36.922116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.922135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.922340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.922362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.922483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.922502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.922607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.922624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.922790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.922809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 
00:30:23.220 [2024-11-20 08:27:36.922999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.923018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.923188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.923217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.923375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.923394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.923497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.923515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 00:30:23.220 [2024-11-20 08:27:36.923734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.923753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.220 qpair failed and we were unable to recover it. 
00:30:23.220 [2024-11-20 08:27:36.923954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.220 [2024-11-20 08:27:36.923973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.221 qpair failed and we were unable to recover it. 00:30:23.221 [2024-11-20 08:27:36.924146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.221 [2024-11-20 08:27:36.924165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.221 qpair failed and we were unable to recover it. 00:30:23.221 [2024-11-20 08:27:36.924335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.221 [2024-11-20 08:27:36.924355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.221 qpair failed and we were unable to recover it. 00:30:23.221 [2024-11-20 08:27:36.924605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.221 [2024-11-20 08:27:36.924625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.221 qpair failed and we were unable to recover it. 00:30:23.221 [2024-11-20 08:27:36.924740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.221 [2024-11-20 08:27:36.924756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.221 qpair failed and we were unable to recover it. 
00:30:23.221 [2024-11-20 08:27:36.925022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.221 [2024-11-20 08:27:36.925041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.221 qpair failed and we were unable to recover it. 00:30:23.221 [2024-11-20 08:27:36.925278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.221 [2024-11-20 08:27:36.925298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.221 qpair failed and we were unable to recover it. 00:30:23.221 [2024-11-20 08:27:36.925403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.221 [2024-11-20 08:27:36.925420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.221 qpair failed and we were unable to recover it. 00:30:23.221 [2024-11-20 08:27:36.925652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.221 [2024-11-20 08:27:36.925672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.221 qpair failed and we were unable to recover it. 00:30:23.221 [2024-11-20 08:27:36.925839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.221 [2024-11-20 08:27:36.925858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.221 qpair failed and we were unable to recover it. 
00:30:23.221 [2024-11-20 08:27:36.926039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.221 [2024-11-20 08:27:36.926059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.221 qpair failed and we were unable to recover it.
00:30:23.221-00:30:23.224 [2024-11-20 08:27:36.926183 through 08:27:36.950772] (the same three-line failure sequence repeats verbatim for every subsequent reconnect attempt in this window, differing only in timestamp: posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420, and each attempt ends with "qpair failed and we were unable to recover it.")
00:30:23.224 [2024-11-20 08:27:36.951024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.951055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.951173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.951213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.951458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.951488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.951621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.951651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.951868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.951897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 
00:30:23.224 [2024-11-20 08:27:36.952014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.952041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.952273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.952305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.952553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.952583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.952719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.952750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.953015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.953046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 
00:30:23.224 [2024-11-20 08:27:36.953226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.953258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.953400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.953430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.953625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.953656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.953905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.953936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.954159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.954189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 
00:30:23.224 [2024-11-20 08:27:36.954393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.954424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.954557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.954587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.954812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.954862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.955060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.955087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.955284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.955311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 
00:30:23.224 [2024-11-20 08:27:36.955414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.955435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.955660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.955682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.224 qpair failed and we were unable to recover it. 00:30:23.224 [2024-11-20 08:27:36.955967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.224 [2024-11-20 08:27:36.955984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.956174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.956191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.956347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.956364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 
00:30:23.225 [2024-11-20 08:27:36.956470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.956486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.956655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.956671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.956844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.956862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.957033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.957050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.957145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.957164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 
00:30:23.225 [2024-11-20 08:27:36.957342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.957367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.957491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.957514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.957631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.957656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.957851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.957875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.958051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.958073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 
00:30:23.225 [2024-11-20 08:27:36.958336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.958354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.958454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.958469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.958644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.958661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.958757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.958772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.959074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.959091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 
00:30:23.225 [2024-11-20 08:27:36.959176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.959196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.959332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.959349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.959450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.959465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.959632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.959670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.959893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.959923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 
00:30:23.225 [2024-11-20 08:27:36.960121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.960152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.960376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.960406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.960542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.960571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.960709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.960741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.960870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.960900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 
00:30:23.225 [2024-11-20 08:27:36.961012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.961041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.961356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.961389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.961640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.961670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.962017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.962050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.962327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.962359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 
00:30:23.225 [2024-11-20 08:27:36.962501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.962530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.962706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.962736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.962945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.962976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.963170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.963209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 00:30:23.225 [2024-11-20 08:27:36.963399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.225 [2024-11-20 08:27:36.963429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.225 qpair failed and we were unable to recover it. 
00:30:23.226 [2024-11-20 08:27:36.963554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.963584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 00:30:23.226 [2024-11-20 08:27:36.963759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.963788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 00:30:23.226 [2024-11-20 08:27:36.963987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.964019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 00:30:23.226 [2024-11-20 08:27:36.964273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.964306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 00:30:23.226 [2024-11-20 08:27:36.964500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.964531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 
00:30:23.226 [2024-11-20 08:27:36.964730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.964760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 00:30:23.226 [2024-11-20 08:27:36.964943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.964973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 00:30:23.226 [2024-11-20 08:27:36.965155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.965188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 00:30:23.226 [2024-11-20 08:27:36.965379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.965409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 00:30:23.226 [2024-11-20 08:27:36.965601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.965631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 
00:30:23.226 [2024-11-20 08:27:36.965845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.965875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 00:30:23.226 [2024-11-20 08:27:36.965989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.966018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 00:30:23.226 [2024-11-20 08:27:36.966213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.966247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 00:30:23.226 [2024-11-20 08:27:36.966493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.966524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 00:30:23.226 [2024-11-20 08:27:36.966645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.226 [2024-11-20 08:27:36.966673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.226 qpair failed and we were unable to recover it. 
00:30:23.226 [2024-11-20 08:27:36.966956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.226 [2024-11-20 08:27:36.966988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.226 qpair failed and we were unable to recover it.
00:30:23.226-00:30:23.229 [the same three-record sequence (connect() failed, errno = 111 -> sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 -> qpair failed and we were unable to recover it) repeated 114 more times between 08:27:36.967102 and 08:27:36.988888; repeats elided]
00:30:23.229 [2024-11-20 08:27:36.989057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.989076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.989159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.989176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.989266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.989284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.989431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.989449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.989691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.989709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 
00:30:23.229 [2024-11-20 08:27:36.989798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.989814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.989995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.990013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.990216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.990254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.993382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.993431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.993595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.993629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 
00:30:23.229 [2024-11-20 08:27:36.993907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.993943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.994096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.994129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.994271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.994308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.994470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.994504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.994763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.994799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 
00:30:23.229 [2024-11-20 08:27:36.994950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.994987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.229 [2024-11-20 08:27:36.995183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.229 [2024-11-20 08:27:36.995247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.229 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.995514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.995554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.995695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.995731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.995856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.995892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 
00:30:23.230 [2024-11-20 08:27:36.996024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.996055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.996240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.996281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.996504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.996543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.996828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.996863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.997077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.997114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 
00:30:23.230 [2024-11-20 08:27:36.997264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.997299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.997507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.997541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.997670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.997705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.997834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.997870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.997996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.998031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 
00:30:23.230 [2024-11-20 08:27:36.998168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.998215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.998352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.998386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.998523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.998557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.998759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.998794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.998917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.998952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 
00:30:23.230 [2024-11-20 08:27:36.999136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.999172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.999369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.999408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.999607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.999650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.999807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:36.999841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:36.999970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.000003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 
00:30:23.230 [2024-11-20 08:27:37.000134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.000169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:37.000375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.000454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:37.000690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.000729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:37.000877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.000913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:37.001167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.001214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 
00:30:23.230 [2024-11-20 08:27:37.001417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.001451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:37.001573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.001605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:37.001797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.001831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:37.001972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.002005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:37.002116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.002150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 
00:30:23.230 [2024-11-20 08:27:37.002353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.002385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:37.002479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.002496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:37.002717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.002734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:37.002877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.230 [2024-11-20 08:27:37.002895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.230 qpair failed and we were unable to recover it. 00:30:23.230 [2024-11-20 08:27:37.002975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.002990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 
00:30:23.231 [2024-11-20 08:27:37.003151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.003178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.003370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.003397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.003495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.003518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.003609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.003633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.003746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.003768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 
00:30:23.231 [2024-11-20 08:27:37.003884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.003909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.004031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.004053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.004137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.004154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.004255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.004273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.004432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.004451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 
00:30:23.231 [2024-11-20 08:27:37.004592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.004610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.004772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.004790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.004878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.004894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.004990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.005006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.005080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.005096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 
00:30:23.231 [2024-11-20 08:27:37.005180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.005196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.005306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.005337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.005484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.005500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.005577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.005594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 00:30:23.231 [2024-11-20 08:27:37.005681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.231 [2024-11-20 08:27:37.005704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.231 qpair failed and we were unable to recover it. 
00:30:23.231 [2024-11-20 08:27:37.005821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.231 [2024-11-20 08:27:37.005844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.231 qpair failed and we were unable to recover it.
00:30:23.234 [... the same three-line error repeats continuously from 08:27:37.005821 through 08:27:37.025385: every connect() attempt to 10.0.0.2 port 4420 fails with errno = 111, nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fc864000b90, and each qpair fails without recovery ...]
00:30:23.234 [2024-11-20 08:27:37.025498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-11-20 08:27:37.025519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-11-20 08:27:37.025634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-11-20 08:27:37.025656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-11-20 08:27:37.025753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-11-20 08:27:37.025776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-11-20 08:27:37.025889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-11-20 08:27:37.025911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-11-20 08:27:37.026079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-11-20 08:27:37.026101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 
00:30:23.234 [2024-11-20 08:27:37.026191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-11-20 08:27:37.026212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-11-20 08:27:37.026303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-11-20 08:27:37.026317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-11-20 08:27:37.026468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-11-20 08:27:37.026489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-11-20 08:27:37.026564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-11-20 08:27:37.026578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 00:30:23.234 [2024-11-20 08:27:37.026659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-11-20 08:27:37.026673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.234 qpair failed and we were unable to recover it. 
00:30:23.234 [2024-11-20 08:27:37.026758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.234 [2024-11-20 08:27:37.026773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.026915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.026930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.027021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.027035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.027220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.027237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.027329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.027344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 
00:30:23.235 [2024-11-20 08:27:37.027487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.027503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.027598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.027613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.027698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.027712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.027861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.027883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.027972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.027991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 
00:30:23.235 [2024-11-20 08:27:37.028162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.028186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.028399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.028423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.028588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.028609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.028758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.028773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.028845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.028859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 
00:30:23.235 [2024-11-20 08:27:37.028941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.028955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.029130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.029145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.029231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.029246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.029402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.029418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.029495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.029509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 
00:30:23.235 [2024-11-20 08:27:37.029664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.029680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.029750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.029764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.029839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.029853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.029936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.029951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.030022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.030036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 
00:30:23.235 [2024-11-20 08:27:37.030189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.030217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.030305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.030324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.030488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.030517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.030621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.030650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.030761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.030789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 
00:30:23.235 [2024-11-20 08:27:37.030973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.031002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.031180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.031218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.031331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.031358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.031481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.031509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.031714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.031743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 
00:30:23.235 [2024-11-20 08:27:37.031857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.031885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.031979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.032004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.235 [2024-11-20 08:27:37.032114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.235 [2024-11-20 08:27:37.032147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.235 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.032255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.032284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.032459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.032490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 
00:30:23.236 [2024-11-20 08:27:37.032602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.032630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.032816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.032845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.033017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.033047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.033151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.033179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.033312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.033339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 
00:30:23.236 [2024-11-20 08:27:37.033456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.033485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.033654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.033684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.033786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.033814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.033908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.033936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.034139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.034169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 
00:30:23.236 [2024-11-20 08:27:37.034291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.034321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.034501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.034530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.034711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.034742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.034848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.034876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.034992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.035020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 
00:30:23.236 [2024-11-20 08:27:37.035180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.035216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.035334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.035362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.035530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.035559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.035730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.035758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.035996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.036026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 
00:30:23.236 [2024-11-20 08:27:37.036221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.036251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.036363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.036391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.036577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.036606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.036703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.036730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 00:30:23.236 [2024-11-20 08:27:37.036845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.236 [2024-11-20 08:27:37.036874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.236 qpair failed and we were unable to recover it. 
00:30:23.236 [2024-11-20 08:27:37.037040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.236 [2024-11-20 08:27:37.037070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.236 qpair failed and we were unable to recover it.
00:30:23.236 [... the same three-line failure (posix_sock_create connect() errno = 111, nvme_tcp_qpair_connect_sock error for tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously from 08:27:37.037 through 08:27:37.052 ...]
00:30:23.239 [2024-11-20 08:27:37.052378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.239 [2024-11-20 08:27:37.052401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.239 qpair failed and we were unable to recover it. 00:30:23.239 [2024-11-20 08:27:37.052641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.239 [2024-11-20 08:27:37.052664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.239 qpair failed and we were unable to recover it. 00:30:23.239 [2024-11-20 08:27:37.052763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.239 [2024-11-20 08:27:37.052782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.239 qpair failed and we were unable to recover it. 00:30:23.239 [2024-11-20 08:27:37.052867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.239 [2024-11-20 08:27:37.052883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.239 qpair failed and we were unable to recover it. 00:30:23.239 [2024-11-20 08:27:37.053024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.053040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 
00:30:23.240 [2024-11-20 08:27:37.053113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.053126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.053274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.053291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.053372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.053389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.053481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.053498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.053566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.053581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 
00:30:23.240 [2024-11-20 08:27:37.053744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.053760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.053847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.053864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.054012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.054028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.054165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.054181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.054279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.054301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 
00:30:23.240 [2024-11-20 08:27:37.054470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.054501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.054656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.054680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.054862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.054889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.054985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.055007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.055164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.055185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 
00:30:23.240 [2024-11-20 08:27:37.055274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.055291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.055381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.055397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.055488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.055504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.055573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.055589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.055657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.055675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 
00:30:23.240 [2024-11-20 08:27:37.055749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.055766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.055947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.055963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.056144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.056160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.056313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.056331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.056440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.056456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 
00:30:23.240 [2024-11-20 08:27:37.056621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.056637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.056802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.056827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.056914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.056936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.057103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.057125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.057229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.057254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 
00:30:23.240 [2024-11-20 08:27:37.057420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.057443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.057601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.057626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.057717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.240 [2024-11-20 08:27:37.057736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.240 qpair failed and we were unable to recover it. 00:30:23.240 [2024-11-20 08:27:37.057823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.057840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.057925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.057941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 
00:30:23.241 [2024-11-20 08:27:37.058080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.058096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.058248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.058265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.058356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.058372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.058466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.058481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.058589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.058603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 
00:30:23.241 [2024-11-20 08:27:37.058819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.058833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.058904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.058917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.059014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.059027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.059109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.059123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.059275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.059307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 
00:30:23.241 [2024-11-20 08:27:37.059393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.059411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.059496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.059515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.059600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.059618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.059700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.059717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.059875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.059894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 
00:30:23.241 [2024-11-20 08:27:37.060082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.060099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.060255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.060274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.060381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.060402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.060487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.060504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.060588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.060605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 
00:30:23.241 [2024-11-20 08:27:37.060699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.060717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.060794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.060813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.060905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.060924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.061073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.061092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.061187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.061225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 
00:30:23.241 [2024-11-20 08:27:37.061377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.061397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.061492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.061511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.061625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.061644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.061749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.061776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.061953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.061979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 
00:30:23.241 [2024-11-20 08:27:37.062093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.062118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.062224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.062252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.062419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.062445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.062550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.062576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.241 [2024-11-20 08:27:37.062771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.062800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 
00:30:23.241 [2024-11-20 08:27:37.063012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.241 [2024-11-20 08:27:37.063041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.241 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.063160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.063188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.063374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.063404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.063656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.063686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.063808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.063837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 
00:30:23.242 [2024-11-20 08:27:37.064005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.064036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.064220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.064250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.064359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.064386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.064555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.064583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.064759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.064796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 
00:30:23.242 [2024-11-20 08:27:37.064981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.065010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.065126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.065156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.065266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.065298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.065462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.065491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.065605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.065633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 
00:30:23.242 [2024-11-20 08:27:37.065816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.065845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.066029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.066058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.066247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.066277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.066384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.066414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.066574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.066595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 
00:30:23.242 [2024-11-20 08:27:37.066693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.066712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.066801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.066821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.066978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.066998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.067079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.067100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.067357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.067378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 
00:30:23.242 [2024-11-20 08:27:37.067470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.067490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.067587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.067607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.067697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.067717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.067795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.067815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.067896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.067916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 
00:30:23.242 [2024-11-20 08:27:37.068010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.068031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.068214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.068244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.068414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.068443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.068566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.068595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.068780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.068809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 
00:30:23.242 [2024-11-20 08:27:37.068934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.068962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.069066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.069094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.069257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.069287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.242 [2024-11-20 08:27:37.069403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.242 [2024-11-20 08:27:37.069425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.242 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.069507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.069527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 
00:30:23.243 [2024-11-20 08:27:37.069674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.069694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.069834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.069854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.069933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.069953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.070033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.070053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.070132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.070151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 
00:30:23.243 [2024-11-20 08:27:37.070240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.070261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.070358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.070378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.070468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.070488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.070572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.070592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.070669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.070697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 
00:30:23.243 [2024-11-20 08:27:37.070777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.070799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.070947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.070972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.071072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.071103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.071227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.071258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.071382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.071413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 
00:30:23.243 [2024-11-20 08:27:37.071598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.071629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.071843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.071875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.071997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.072028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.072218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.072251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.072362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.072387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 
00:30:23.243 [2024-11-20 08:27:37.072609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.072632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.072711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.072733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.072944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.073020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.073266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.073306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.073575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.073610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 
00:30:23.243 [2024-11-20 08:27:37.073792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.073825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.073993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.074025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.074147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.074180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.074369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.074403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.074583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.074616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 
00:30:23.243 [2024-11-20 08:27:37.074811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.074843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.074948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.074982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.075107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.075140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.075270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.075305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.075453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.075487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 
00:30:23.243 [2024-11-20 08:27:37.075660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.243 [2024-11-20 08:27:37.075693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.243 qpair failed and we were unable to recover it. 00:30:23.243 [2024-11-20 08:27:37.075823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.075857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.075964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.075998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.076113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.076145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.076269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.076304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 
00:30:23.244 [2024-11-20 08:27:37.076477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.076511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.076618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.076651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.076901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.076934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.077091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.077124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.077337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.077373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 
00:30:23.244 [2024-11-20 08:27:37.077480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.077514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.077617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.077650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.077791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.077823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.077946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.077980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.078164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.078210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 
00:30:23.244 [2024-11-20 08:27:37.078338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.078372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.078501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.078534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.078662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.078694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.078815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.078848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.078958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.078992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 
00:30:23.244 [2024-11-20 08:27:37.079170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.079210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.079344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.079377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.079496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.079529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.079700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.079734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 00:30:23.244 [2024-11-20 08:27:37.079859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.244 [2024-11-20 08:27:37.079891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.244 qpair failed and we were unable to recover it. 
00:30:23.246 [2024-11-20 08:27:37.094933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.246 [2024-11-20 08:27:37.094965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.246 qpair failed and we were unable to recover it. 
00:30:23.246 [2024-11-20 08:27:37.095665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.246 [2024-11-20 08:27:37.095684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.246 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.095824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.095843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.096010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.096030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.096126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.096144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.096335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.096356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 
00:30:23.247 [2024-11-20 08:27:37.096437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.096453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.096625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.096645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.096815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.096835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.096941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.096961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.097109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.097128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 
00:30:23.247 [2024-11-20 08:27:37.097213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.097231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.097327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.097346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.097441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.097458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.097529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.097545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.097620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.097636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 
00:30:23.247 [2024-11-20 08:27:37.097712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.097730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.097805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.097822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.097930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.097947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.098023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.098040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.098136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.098155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 
00:30:23.247 [2024-11-20 08:27:37.098231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.098249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.098338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.098354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.098435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.098452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.098524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.098542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.098705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.098728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 
00:30:23.247 [2024-11-20 08:27:37.098822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.098839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.098986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.099007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.099101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.099120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.099285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.099303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.099389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.099402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 
00:30:23.247 [2024-11-20 08:27:37.099542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.099556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.099710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.099724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.099803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.099816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.099893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.099905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.099965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.099978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 
00:30:23.247 [2024-11-20 08:27:37.100037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.100050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.100182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.100195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.100282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.100296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.247 qpair failed and we were unable to recover it. 00:30:23.247 [2024-11-20 08:27:37.100475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.247 [2024-11-20 08:27:37.100489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.100571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.100583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 
00:30:23.248 [2024-11-20 08:27:37.100649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.100661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.100730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.100743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.100817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.100829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.101029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.101049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.101212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.101232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 
00:30:23.248 [2024-11-20 08:27:37.101333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.101351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.101445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.101462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.101559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.101576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.101727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.101744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.101911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.101929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 
00:30:23.248 [2024-11-20 08:27:37.102019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.102033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.102115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.102130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.102227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.102243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.102330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.102347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.102415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.102431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 
00:30:23.248 [2024-11-20 08:27:37.102513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.102528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.102710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.102727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.102814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.102829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.102898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.102914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.103003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.103017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 
00:30:23.248 [2024-11-20 08:27:37.103164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.103182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.103273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.103286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.103373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.103385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.103451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.103461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.103554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.103568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 
00:30:23.248 [2024-11-20 08:27:37.103696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.103709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.103776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.103786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.103872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.103882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.103952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.103963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.104030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.104041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 
00:30:23.248 [2024-11-20 08:27:37.104105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.104116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.104189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.104199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.104278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.104289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.104353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.104364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.104434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.104445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 
00:30:23.248 [2024-11-20 08:27:37.104506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.248 [2024-11-20 08:27:37.104517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.248 qpair failed and we were unable to recover it. 00:30:23.248 [2024-11-20 08:27:37.104592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.104603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.104658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.104668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.104736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.104747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.104805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.104818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 
00:30:23.249 [2024-11-20 08:27:37.104971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.104986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.105136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.105152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.105240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.105256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.105328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.105342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.105487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.105503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 
00:30:23.249 [2024-11-20 08:27:37.105584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.105601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.105767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.105785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.105867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.105883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.105976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.105992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.106142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.106157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 
00:30:23.249 [2024-11-20 08:27:37.106222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.106234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.106311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.106323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.106391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.106401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.106465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.106476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.106625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.106637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 
00:30:23.249 [2024-11-20 08:27:37.106711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.106722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.106787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.106798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.106877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.106888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.107015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.107027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.107098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.107108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 
00:30:23.249 [2024-11-20 08:27:37.107187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.107199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.107281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.107292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.107354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.107365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.107422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.107432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 00:30:23.249 [2024-11-20 08:27:37.107563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.249 [2024-11-20 08:27:37.107577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.249 qpair failed and we were unable to recover it. 
00:30:23.252 [2024-11-20 08:27:37.121127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-11-20 08:27:37.121152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-11-20 08:27:37.121313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-11-20 08:27:37.121327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-11-20 08:27:37.121402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-11-20 08:27:37.121416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-11-20 08:27:37.121569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-11-20 08:27:37.121582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-11-20 08:27:37.121748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-11-20 08:27:37.121772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 
00:30:23.252 [2024-11-20 08:27:37.121876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-11-20 08:27:37.121898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-11-20 08:27:37.122125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.252 [2024-11-20 08:27:37.122148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.252 qpair failed and we were unable to recover it. 00:30:23.252 [2024-11-20 08:27:37.122253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.122276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.122378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.122400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.122492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.122520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 
00:30:23.253 [2024-11-20 08:27:37.122620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.122646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.122740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.122756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.122936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.122951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.123100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.123117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.123259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.123276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 
00:30:23.253 [2024-11-20 08:27:37.123381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.123396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.123532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.123549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.123634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.123649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.123828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.123843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.124077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.124101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 
00:30:23.253 [2024-11-20 08:27:37.124217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.124239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.124342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.124363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.124511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.124535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.124730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.124753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.124844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.124867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 
00:30:23.253 [2024-11-20 08:27:37.124978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.124998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.125084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.125101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.125178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.125194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.125410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.125427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.125574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.125590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 
00:30:23.253 [2024-11-20 08:27:37.125671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.125687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.126577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.126612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.126795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.126820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.126974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.126998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.127236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.127262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 
00:30:23.253 [2024-11-20 08:27:37.127460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.127477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.127620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.127636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.127713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.127729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.127805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.127822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.127969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.127985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 
00:30:23.253 [2024-11-20 08:27:37.128121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.128137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.128273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.128290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.128361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.128376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.128512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.128528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 00:30:23.253 [2024-11-20 08:27:37.128614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.253 [2024-11-20 08:27:37.128630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.253 qpair failed and we were unable to recover it. 
00:30:23.253 [2024-11-20 08:27:37.128720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.128743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.128900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.128922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.129088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.129111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.129187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.129227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.129358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.129385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 
00:30:23.254 [2024-11-20 08:27:37.129466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.129489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.129579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.129601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.129770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.129789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.129861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.129877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.129969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.129986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 
00:30:23.254 [2024-11-20 08:27:37.130123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.130139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.130351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.130367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.130505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.130521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.130670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.130687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.130833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.130848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 
00:30:23.254 [2024-11-20 08:27:37.131001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.131020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.131098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.131121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.131273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.131296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.131388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.131410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.131518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.131542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 
00:30:23.254 [2024-11-20 08:27:37.131653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.131676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.131791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.131818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.131992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.132021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.132125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.132147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.132250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.132271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 
00:30:23.254 [2024-11-20 08:27:37.132449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.132469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.132626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.132645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.132869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.132889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.133032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.133053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.133139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.133158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 
00:30:23.254 [2024-11-20 08:27:37.133258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.133279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.133386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.133406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.133564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.133593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.254 [2024-11-20 08:27:37.133698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.254 [2024-11-20 08:27:37.133726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.254 qpair failed and we were unable to recover it. 00:30:23.255 [2024-11-20 08:27:37.133829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.255 [2024-11-20 08:27:37.133856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.255 qpair failed and we were unable to recover it. 
00:30:23.255 [... the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it — repeats continuously through 2024-11-20 08:27:37.150788 (console time 00:30:23.257) ...]
00:30:23.257 [2024-11-20 08:27:37.150860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.257 [2024-11-20 08:27:37.150875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.257 qpair failed and we were unable to recover it. 00:30:23.257 [2024-11-20 08:27:37.150970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.257 [2024-11-20 08:27:37.150985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.257 qpair failed and we were unable to recover it. 00:30:23.257 [2024-11-20 08:27:37.151058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.257 [2024-11-20 08:27:37.151073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.257 qpair failed and we were unable to recover it. 00:30:23.257 [2024-11-20 08:27:37.151219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.257 [2024-11-20 08:27:37.151236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.257 qpair failed and we were unable to recover it. 00:30:23.257 [2024-11-20 08:27:37.151384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.151401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 
00:30:23.258 [2024-11-20 08:27:37.151496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.151512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.151652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.151672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.151746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.151766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.151960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.151982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.152135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.152171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 
00:30:23.258 [2024-11-20 08:27:37.152347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.152377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.152500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.152528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.152701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.152729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.152901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.152930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.153093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.153121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 
00:30:23.258 [2024-11-20 08:27:37.153221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.153249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.153349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.153377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.153488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.153517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.153624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.153652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.153818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.153846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 
00:30:23.258 [2024-11-20 08:27:37.153973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.154001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.154113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.154142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.154260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.154289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.154393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.154422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.154521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.154548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 
00:30:23.258 [2024-11-20 08:27:37.154651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.154678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.154839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.154868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.155046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.155074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.155183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.155221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.155337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.155366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 
00:30:23.258 [2024-11-20 08:27:37.155479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.155507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.155606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.155633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.155828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.155857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.155961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.155990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.156177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.156233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 
00:30:23.258 [2024-11-20 08:27:37.156438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.156466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.156643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.156671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.156831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.156858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.156960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.156987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.258 [2024-11-20 08:27:37.157157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.157186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 
00:30:23.258 [2024-11-20 08:27:37.157305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.258 [2024-11-20 08:27:37.157333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.258 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.157514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.157541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.157665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.157695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.157794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.157824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.158041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.158069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 
00:30:23.259 [2024-11-20 08:27:37.158233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.158270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.158385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.158417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.158599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.158627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.158728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.158756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.158918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.158947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 
00:30:23.259 [2024-11-20 08:27:37.159049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.159077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.159328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.159358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.159458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.159486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.159727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.159755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.159873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.159901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 
00:30:23.259 [2024-11-20 08:27:37.160018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.160046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.160159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.160186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.160421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.160450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.160556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.160584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.160746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.160774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 
00:30:23.259 [2024-11-20 08:27:37.160893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.160922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.161095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.161124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.161238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.161269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.161452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.161480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.161646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.161674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 
00:30:23.259 [2024-11-20 08:27:37.161880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.161910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.162138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.162158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.162258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.162279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.162443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.162462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.162565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.162586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 
00:30:23.259 [2024-11-20 08:27:37.162732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.162751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.162900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.162920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.163025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.163044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.163130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.163149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.163300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.163321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 
00:30:23.259 [2024-11-20 08:27:37.163487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.163508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.163606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.163625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.163771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.163791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.163888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.163907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.164013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.164034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 
00:30:23.259 [2024-11-20 08:27:37.164116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.164130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.259 qpair failed and we were unable to recover it. 00:30:23.259 [2024-11-20 08:27:37.164289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.259 [2024-11-20 08:27:37.164304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.164453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.164467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.164538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.164551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.164626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.164640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 
00:30:23.260 [2024-11-20 08:27:37.164711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.164723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.164788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.164805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.164871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.164884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.164988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.165002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.165075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.165087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 
00:30:23.260 [2024-11-20 08:27:37.165288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.165302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.165448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.165461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.165538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.165550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.165616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.165628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.165708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.165725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 
00:30:23.260 [2024-11-20 08:27:37.165930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.165949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.166044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.166063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.166215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.166236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.166317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.166335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.166418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.166447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 
00:30:23.260 [2024-11-20 08:27:37.166555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.166570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.166639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.166651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.166720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.166732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.166790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.166802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.166881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.166893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 
00:30:23.260 [2024-11-20 08:27:37.166963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.166976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.167124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.167138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.167224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.167239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.167383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.167396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.167476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.167488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 
00:30:23.260 [2024-11-20 08:27:37.167556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.167569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.167638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.167650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.167711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.167724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.167818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.167831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.167917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.167931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 
00:30:23.260 [2024-11-20 08:27:37.168041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.168060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.168149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.168168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.168325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.168345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.168505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.168525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.168735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.168753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 
00:30:23.260 [2024-11-20 08:27:37.168889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.168903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.169069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.260 [2024-11-20 08:27:37.169083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.260 qpair failed and we were unable to recover it. 00:30:23.260 [2024-11-20 08:27:37.169154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.169167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.169253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.169266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.169406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.169420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 
00:30:23.261 [2024-11-20 08:27:37.169508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.169522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.169606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.169622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.169700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.169713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.169839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.169854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.169929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.169942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 
00:30:23.261 [2024-11-20 08:27:37.170070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.170084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.170161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.170173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.170261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.170276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.170359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.170378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.170465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.170484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 
00:30:23.261 [2024-11-20 08:27:37.170589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.170607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.170687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.170705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.170847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.170867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.170944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.170963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.171055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.171076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 
00:30:23.261 [2024-11-20 08:27:37.171173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.171189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.171333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.171348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.171479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.171492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.171582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.171595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.171680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.171693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 
00:30:23.261 [2024-11-20 08:27:37.171770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.171783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.171912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.171926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.172071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.172084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.172167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.172181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.172263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.172276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 
00:30:23.261 [2024-11-20 08:27:37.172360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.172373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.172462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.172475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.172555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.172569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.172654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.172672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.172755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.172776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 
00:30:23.261 [2024-11-20 08:27:37.172935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.172957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.173047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.173071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.173154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.173176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.173395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.173420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.173587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.173613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 
00:30:23.261 [2024-11-20 08:27:37.173702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.173720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.261 [2024-11-20 08:27:37.173809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.261 [2024-11-20 08:27:37.173826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.261 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.173942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.173959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.174028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.174043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.174113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.174128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 
00:30:23.262 [2024-11-20 08:27:37.174219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.174235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.174321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.174342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.174483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.174500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.174636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.174652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.174723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.174737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 
00:30:23.262 [2024-11-20 08:27:37.174881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.174898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.174966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.174982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.175053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.175068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.175215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.175239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.175333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.175356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 
00:30:23.262 [2024-11-20 08:27:37.175450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.175473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.175578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.175602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.175777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.175802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.175889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.175912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 00:30:23.262 [2024-11-20 08:27:37.176003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.262 [2024-11-20 08:27:37.176026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.262 qpair failed and we were unable to recover it. 
00:30:23.262 [2024-11-20 08:27:37.176185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.176213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.176287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.176304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.176449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.176466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.176538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.176553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.176635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.176652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.176859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.176876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.176959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.176977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.177056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.177073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.177148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.177165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.177317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.177334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.177410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.177425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.177497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.177513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.177613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.177629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.177716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.177738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.177898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.177922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.178112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.178135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.178289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.178314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.178428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.178452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.178539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.178562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.178727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.178748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.178826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.178842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.178924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.178941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.179022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.179038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.179116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.179132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.179220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.262 [2024-11-20 08:27:37.179238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.262 qpair failed and we were unable to recover it.
00:30:23.262 [2024-11-20 08:27:37.179449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.179466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.179553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.179573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.179637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.179652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.179865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.179882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.179978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.179995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.180183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.180200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.180348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.180372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.180534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.180558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.180816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.180840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.181001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.181024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.181115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.181139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.181233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.181251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.181323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.181340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.181485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.181501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.181648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.181665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.181743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.181760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.181848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.181866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.181940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.181956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.182158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.182174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.182433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.182452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.182535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.182552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.182725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.182750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.182914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.182942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.183117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.183147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.183313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.183344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.183453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.183482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.183580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.263 [2024-11-20 08:27:37.183609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.263 qpair failed and we were unable to recover it.
00:30:23.263 [2024-11-20 08:27:37.183774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.183803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.184034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.184106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.184265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.184304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.184434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.184468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.184598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.184631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.184809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.184842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.184969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.185002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.185112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.185145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.185396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.185430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.185608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.185641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.185841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.185874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.186158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.186192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.186447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.186480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.186653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.186686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.186812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.186854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.186979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.187012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.187119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.187152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.187283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.187317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.187443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.187475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.187592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.187625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.187744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.187777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.187899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.187932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.188046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.188079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.188258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.188293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.188561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.188593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.188788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.188821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.189011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.189045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.189173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.189213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.189439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.189473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.189649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.189682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.189811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.189843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.189969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.190002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.190132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.190164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.190351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.190391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.190582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.190615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.190729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.190762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.264 [2024-11-20 08:27:37.190875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.264 [2024-11-20 08:27:37.190908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.264 qpair failed and we were unable to recover it.
00:30:23.265 [2024-11-20 08:27:37.191098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.265 [2024-11-20 08:27:37.191130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.265 qpair failed and we were unable to recover it.
00:30:23.265 [2024-11-20 08:27:37.191241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.265 [2024-11-20 08:27:37.191277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.265 qpair failed and we were unable to recover it.
00:30:23.265 [2024-11-20 08:27:37.191457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.265 [2024-11-20 08:27:37.191489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.265 qpair failed and we were unable to recover it.
00:30:23.265 [2024-11-20 08:27:37.191676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.265 [2024-11-20 08:27:37.191710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.265 qpair failed and we were unable to recover it.
00:30:23.265 [2024-11-20 08:27:37.191882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.265 [2024-11-20 08:27:37.191904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.265 qpair failed and we were unable to recover it.
00:30:23.265 [2024-11-20 08:27:37.192009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.265 [2024-11-20 08:27:37.192030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.265 qpair failed and we were unable to recover it.
00:30:23.265 [2024-11-20 08:27:37.192186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.192232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 00:30:23.265 [2024-11-20 08:27:37.192328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.192357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 00:30:23.265 [2024-11-20 08:27:37.192469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.192497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 00:30:23.265 [2024-11-20 08:27:37.192612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.192641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 00:30:23.265 [2024-11-20 08:27:37.192752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.192781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 
00:30:23.265 [2024-11-20 08:27:37.192882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.192911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 00:30:23.265 [2024-11-20 08:27:37.193100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.193127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 00:30:23.265 [2024-11-20 08:27:37.193243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.193270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 00:30:23.265 [2024-11-20 08:27:37.193435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.193460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 00:30:23.265 [2024-11-20 08:27:37.193632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.193660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 
00:30:23.265 [2024-11-20 08:27:37.193762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.193787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 00:30:23.265 [2024-11-20 08:27:37.193945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.193975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 00:30:23.265 [2024-11-20 08:27:37.194061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.194087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 00:30:23.265 [2024-11-20 08:27:37.194186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.265 [2024-11-20 08:27:37.194216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.265 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.194344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.194371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 
00:30:23.593 [2024-11-20 08:27:37.194529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.194554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.194658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.194684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.194840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.194865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.194966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.194992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.195218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.195246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 
00:30:23.593 [2024-11-20 08:27:37.195476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.195503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.195679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.195705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.195879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.195905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.196083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.196108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.196270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.196298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 
00:30:23.593 [2024-11-20 08:27:37.196410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.196436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.196550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.196576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.196676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.196701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.196855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.196883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 00:30:23.593 [2024-11-20 08:27:37.197040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.197066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.593 qpair failed and we were unable to recover it. 
00:30:23.593 [2024-11-20 08:27:37.197228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.593 [2024-11-20 08:27:37.197255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.197352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.197373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.197522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.197541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.197630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.197648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.197802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.197820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 
00:30:23.594 [2024-11-20 08:27:37.197979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.197997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.198131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.198149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.198316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.198335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.198489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.198507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.198594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.198612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 
00:30:23.594 [2024-11-20 08:27:37.198715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.198733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.198876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.198894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.198989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.199012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.199196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.199241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.199404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.199430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 
00:30:23.594 [2024-11-20 08:27:37.199606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.199633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.199857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.199883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.199981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.200006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.200101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.200127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.200287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.200314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 
00:30:23.594 [2024-11-20 08:27:37.200482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.200507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.200662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.200692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.200865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.200890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.201003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.201030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.201149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.201175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 
00:30:23.594 [2024-11-20 08:27:37.201293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.201320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.201411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.201434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.201591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.201618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.201754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.201780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.201943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.201970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 
00:30:23.594 [2024-11-20 08:27:37.202058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.202083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.202189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.202224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.202398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.202425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.202625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.202652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.202751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.202776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 
00:30:23.594 [2024-11-20 08:27:37.202895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.202928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.203032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.203052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.203173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.594 [2024-11-20 08:27:37.203196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.594 qpair failed and we were unable to recover it. 00:30:23.594 [2024-11-20 08:27:37.203319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.203341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.203447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.203469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 
00:30:23.595 [2024-11-20 08:27:37.203561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.203582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.203730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.203751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.203840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.203861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.204013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.204032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.204182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.204197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 
00:30:23.595 [2024-11-20 08:27:37.204275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.204290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.204445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.204460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.204594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.204609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.204689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.204704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.204806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.204822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 
00:30:23.595 [2024-11-20 08:27:37.204959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.204974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.205045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.205059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.205170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.205185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.205263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.205276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.205420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.205435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 
00:30:23.595 [2024-11-20 08:27:37.205516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.205531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.205696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.205716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.205866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.205887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.206006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.206028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.206131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.206152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 
00:30:23.595 [2024-11-20 08:27:37.206260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.206279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.206356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.206381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.206567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.206585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.206735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.206750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.206820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.206834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 
00:30:23.595 [2024-11-20 08:27:37.206971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.206986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.207086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.207101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.207250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.207266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.207347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.207360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.207503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.207518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 
00:30:23.595 [2024-11-20 08:27:37.207679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.207695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.207776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.207791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.207868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.207882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.208037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.208052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.595 [2024-11-20 08:27:37.208220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.208243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 
00:30:23.595 [2024-11-20 08:27:37.208344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.595 [2024-11-20 08:27:37.208365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.595 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.208462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.208483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.208631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.208654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.208871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.208893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.209104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.209122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 
00:30:23.596 [2024-11-20 08:27:37.209275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.209291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.209386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.209402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.209536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.209551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.209615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.209629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.209761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.209777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 
00:30:23.596 [2024-11-20 08:27:37.209845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.209859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.209943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.209959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.210049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.210065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.210156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.210171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.210254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.210268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 
00:30:23.596 [2024-11-20 08:27:37.210415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.210430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.210514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.210528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.210616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.210638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.210720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.210740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.210834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.210854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 
00:30:23.596 [2024-11-20 08:27:37.210999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.211022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.211182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.211208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.211356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.211378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.211532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.211552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.211766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.211782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 
00:30:23.596 [2024-11-20 08:27:37.211854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.211868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.211963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.211982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.212120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.212152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.212329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.212364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.212538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.212581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 
00:30:23.596 [2024-11-20 08:27:37.212731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.212746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.212825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.212839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.213006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.213021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.213118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.213151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.213288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.213323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 
00:30:23.596 [2024-11-20 08:27:37.213503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.213536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.213646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.213679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.213864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.213896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.596 [2024-11-20 08:27:37.214091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.596 [2024-11-20 08:27:37.214125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.596 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.214369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.214404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 
00:30:23.597 [2024-11-20 08:27:37.214588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.214622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.214736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.214769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.214908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.214941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.215066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.215099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.215366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.215402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 
00:30:23.597 [2024-11-20 08:27:37.215610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.215642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.215855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.215887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.216024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.216058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.216253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.216288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.216405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.216439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 
00:30:23.597 [2024-11-20 08:27:37.216702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.216734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.216932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.216964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.217080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.217114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.217307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.217348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.217456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.217489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 
00:30:23.597 [2024-11-20 08:27:37.217605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.217639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.217811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.217843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.218026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.218059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.218318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.218352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.218569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.218602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 
00:30:23.597 [2024-11-20 08:27:37.218716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.218749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.218940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.218973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.219198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.219241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.219368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.219402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.219596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.219631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 
00:30:23.597 [2024-11-20 08:27:37.219739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.219771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.219968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.220001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.220193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.220238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.220443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.220476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.220598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.220631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 
00:30:23.597 [2024-11-20 08:27:37.220805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.220848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.221023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.221056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.221170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.221227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.221413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.221445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 00:30:23.597 [2024-11-20 08:27:37.221555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.221589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it. 
00:30:23.597 [2024-11-20 08:27:37.221773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.597 [2024-11-20 08:27:37.221806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.597 qpair failed and we were unable to recover it.
[... identical error pair repeats continuously from 08:27:37.221 through 08:27:37.245: posix_sock_create connect() fails with errno = 111 (ECONNREFUSED), and nvme_tcp_qpair_connect_sock reports a sock connection error for tqpair=0x7fc864000b90 (addr=10.0.0.2, port=4420), each followed by "qpair failed and we were unable to recover it." ...]
00:30:23.601 [2024-11-20 08:27:37.246012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.246045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.246158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.246191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.246442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.246475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.246606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.246639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.246750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.246783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 
00:30:23.601 [2024-11-20 08:27:37.246989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.247022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.247153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.247186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.247373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.247407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.247601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.247635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.247757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.247789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 
00:30:23.601 [2024-11-20 08:27:37.247915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.247950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.248141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.248174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.248398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.248434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.248618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.248651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.248844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.248878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 
00:30:23.601 [2024-11-20 08:27:37.248993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.249026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.249196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.249260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.249381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.249412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.249686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.249719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.249912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.249945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 
00:30:23.601 [2024-11-20 08:27:37.250117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.250151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.250278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.250313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.250523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.250556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.250825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.250865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.251053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.251087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 
00:30:23.601 [2024-11-20 08:27:37.251195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.251241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.251503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.251536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.251736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.251769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.251947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.251980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.601 [2024-11-20 08:27:37.252275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.252309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 
00:30:23.601 [2024-11-20 08:27:37.252500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.601 [2024-11-20 08:27:37.252533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.601 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.252665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.252698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.252825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.252857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.252981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.253014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.253191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.253233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 
00:30:23.602 [2024-11-20 08:27:37.253347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.253381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.253558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.253591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.253724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.253758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.254049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.254083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.254209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.254245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 
00:30:23.602 [2024-11-20 08:27:37.254353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.254386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.254629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.254663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.254839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.254872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.255055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.255089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.255194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.255237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 
00:30:23.602 [2024-11-20 08:27:37.255448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.255481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.255664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.255698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.255878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.255913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.256025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.256058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.256296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.256332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 
00:30:23.602 [2024-11-20 08:27:37.256514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.256547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.256656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.256689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.256877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.256911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.257117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.257151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.257376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.257412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 
00:30:23.602 [2024-11-20 08:27:37.257599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.257633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.257732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.257765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.257892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.257925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.258116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.258150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.258354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.258389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 
00:30:23.602 [2024-11-20 08:27:37.258567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.258600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.258847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.258881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.258991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.259024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.259154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.602 [2024-11-20 08:27:37.259193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.602 qpair failed and we were unable to recover it. 00:30:23.602 [2024-11-20 08:27:37.259403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.259436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 
00:30:23.603 [2024-11-20 08:27:37.259549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.259582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 00:30:23.603 [2024-11-20 08:27:37.259827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.259861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 00:30:23.603 [2024-11-20 08:27:37.260060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.260093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 00:30:23.603 [2024-11-20 08:27:37.260278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.260313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 00:30:23.603 [2024-11-20 08:27:37.260450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.260482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 
00:30:23.603 [2024-11-20 08:27:37.260661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.260694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 00:30:23.603 [2024-11-20 08:27:37.260869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.260902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 00:30:23.603 [2024-11-20 08:27:37.261019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.261052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 00:30:23.603 [2024-11-20 08:27:37.261178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.261218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 00:30:23.603 [2024-11-20 08:27:37.261419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.261452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 
00:30:23.603 [2024-11-20 08:27:37.261569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.261603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 00:30:23.603 [2024-11-20 08:27:37.261806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.261838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 00:30:23.603 [2024-11-20 08:27:37.261977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.262012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 00:30:23.603 [2024-11-20 08:27:37.262142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.262175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 00:30:23.603 [2024-11-20 08:27:37.262428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.603 [2024-11-20 08:27:37.262461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.603 qpair failed and we were unable to recover it. 
00:30:23.605 [2024-11-20 08:27:37.282582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.605 [2024-11-20 08:27:37.282655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.605 qpair failed and we were unable to recover it.
00:30:23.606 [2024-11-20 08:27:37.287059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.287093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.287261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.287303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.287435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.287468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.287713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.287747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.287941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.287975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 
00:30:23.606 [2024-11-20 08:27:37.288157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.288190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.288393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.288427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.288675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.288708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.288840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.288874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.289194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.289237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 
00:30:23.606 [2024-11-20 08:27:37.289372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.289405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.289594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.289627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.289740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.289774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.289900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.289932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.290066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.290099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 
00:30:23.606 [2024-11-20 08:27:37.290223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.290258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.290443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.290477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.290612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.290644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.290862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.290896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.291016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.291049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 
00:30:23.606 [2024-11-20 08:27:37.291321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.291356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.291494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.291527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.291780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.291813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.292058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.292091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 00:30:23.606 [2024-11-20 08:27:37.292310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.606 [2024-11-20 08:27:37.292344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.606 qpair failed and we were unable to recover it. 
00:30:23.607 [2024-11-20 08:27:37.292474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.292507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.292746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.292778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.292898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.292931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.293055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.293088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.293263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.293297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 
00:30:23.607 [2024-11-20 08:27:37.293554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.293586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.293762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.293796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.293978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.294011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.294195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.294250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.294465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.294499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 
00:30:23.607 [2024-11-20 08:27:37.294688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.294721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.294905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.294938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.295117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.295150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.295344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.295379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.295561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.295594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 
00:30:23.607 [2024-11-20 08:27:37.295780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.295814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.295999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.296033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.296232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.296268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.296407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.296441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.296609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.296641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 
00:30:23.607 [2024-11-20 08:27:37.296782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.296815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.296921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.296952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.297075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.297108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.297225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.297260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.297393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.297426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 
00:30:23.607 [2024-11-20 08:27:37.297676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.297708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.297832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.297866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.297976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.298009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.298146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.298180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.298304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.298338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 
00:30:23.607 [2024-11-20 08:27:37.298453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.298485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.298686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.298719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.298840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.607 [2024-11-20 08:27:37.298873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.607 qpair failed and we were unable to recover it. 00:30:23.607 [2024-11-20 08:27:37.299055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.299089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.299267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.299301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 
00:30:23.608 [2024-11-20 08:27:37.299504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.299537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.299805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.299838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.300082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.300115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.300246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.300282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.300458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.300490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 
00:30:23.608 [2024-11-20 08:27:37.300620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.300652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.300840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.300873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.301062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.301095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.301375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.301410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.301614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.301647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 
00:30:23.608 [2024-11-20 08:27:37.301896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.301928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.302109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.302143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.302411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.302448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.302630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.302670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 00:30:23.608 [2024-11-20 08:27:37.302875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.608 [2024-11-20 08:27:37.302907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:23.608 qpair failed and we were unable to recover it. 
00:30:23.608 [2024-11-20 08:27:37.303082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.608 [2024-11-20 08:27:37.303114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:23.608 qpair failed and we were unable to recover it.
[above three-line error sequence repeated with varying timestamps from 08:27:37.303287 through 08:27:37.316547, tqpair=0x7fc870000b90 throughout]
00:30:23.610 [2024-11-20 08:27:37.316727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.610 [2024-11-20 08:27:37.316802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.610 qpair failed and we were unable to recover it.
[above three-line error sequence repeated with varying timestamps from 08:27:37.316959 through 08:27:37.326876, tqpair=0x191fba0 throughout]
00:30:23.611 [2024-11-20 08:27:37.327081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.327115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.327296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.327332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.327461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.327494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.327605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.327638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.327744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.327783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 
00:30:23.611 [2024-11-20 08:27:37.327898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.327932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.328050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.328084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.328280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.328315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.328493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.328527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.328637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.328670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 
00:30:23.611 [2024-11-20 08:27:37.328779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.328813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.329052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.329086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.329263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.329297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.329532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.329565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.329833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.329866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 
00:30:23.611 [2024-11-20 08:27:37.330006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.611 [2024-11-20 08:27:37.330039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.611 qpair failed and we were unable to recover it. 00:30:23.611 [2024-11-20 08:27:37.330158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.330191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.330383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.330417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.330615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.330649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.330790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.330824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 
00:30:23.612 [2024-11-20 08:27:37.330997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.331031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.331136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.331169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.331418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.331453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.331581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.331614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.331721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.331755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 
00:30:23.612 [2024-11-20 08:27:37.331931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.331964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.332200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.332245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.332512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.332545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.332733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.332766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.333030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.333063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 
00:30:23.612 [2024-11-20 08:27:37.333185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.333229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.333469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.333503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.333618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.333651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.333832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.333866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.333990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.334023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 
00:30:23.612 [2024-11-20 08:27:37.334159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.334193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.334390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.334424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.334552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.334586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.334758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.334791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.334966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.335000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 
00:30:23.612 [2024-11-20 08:27:37.335186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.335230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.335432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.335466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.335640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.335673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.335803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.335837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.336034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.336067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 
00:30:23.612 [2024-11-20 08:27:37.336264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.336300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.336512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.336546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.336723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.336756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.336890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.336923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.337049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.337082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 
00:30:23.612 [2024-11-20 08:27:37.337254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.337290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.337531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.337564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.337684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.337718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.337895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.612 [2024-11-20 08:27:37.337929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.612 qpair failed and we were unable to recover it. 00:30:23.612 [2024-11-20 08:27:37.338194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.338237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 
00:30:23.613 [2024-11-20 08:27:37.338412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.338446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.338563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.338597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.338803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.338836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.339012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.339045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.339170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.339213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 
00:30:23.613 [2024-11-20 08:27:37.339385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.339418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.339539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.339572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.339794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.339827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.340007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.340041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.340281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.340315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 
00:30:23.613 [2024-11-20 08:27:37.340440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.340472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.340648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.340681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.340796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.340828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.341009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.341043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.341226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.341261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 
00:30:23.613 [2024-11-20 08:27:37.341390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.341423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.341603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.341636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.341823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.341868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.341997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.342030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 00:30:23.613 [2024-11-20 08:27:37.342220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.613 [2024-11-20 08:27:37.342254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.613 qpair failed and we were unable to recover it. 
00:30:23.613 [2024-11-20 08:27:37.342370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.342404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.342589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.342623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.342742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.342775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.343060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.343093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.343358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.343392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.343608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.343641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.343832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.343867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.344158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.344192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.344438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.344472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.344671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.344704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.344877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.344911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.345102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.345135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.345247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.345282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.345460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.345493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.345678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.345711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.345837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.345871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.346132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.346165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.613 [2024-11-20 08:27:37.346302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.613 [2024-11-20 08:27:37.346337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.613 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.346532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.346566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.346757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.346790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.346976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.347010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.347184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.347226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.347439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.347474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.347651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.347685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.347954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.347992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.348183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.348225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.348465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.348498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.348742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.348776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.348961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.348995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.349177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.349220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.349505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.349538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.349784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.349817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.350026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.350059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.350169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.350210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.350349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.350382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.350520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.350554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.350746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.350779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.350968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.351002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.351191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.351238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.351419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.351453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.351692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.351726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.351852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.351885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.352054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.352088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.352270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.352305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.352546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.352579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.352692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.352725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.352915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.352949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.353219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.353254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.353496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.353530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.353652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.353685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.353925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.353958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.354127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.354166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.354366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.354400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.354517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.354549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.354681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.354714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.614 [2024-11-20 08:27:37.355009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.614 [2024-11-20 08:27:37.355043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.614 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.355283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.355318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.355557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.355589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.355722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.355756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.355866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.355899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.356030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.356063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.356252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.356287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.356483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.356517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.356721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.356754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.356926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.356959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.357136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.357170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.357354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.357388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.357529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.357563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.357829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.357862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.357964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.357998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.358126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.358160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.358342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.358377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.358493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.358527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.358787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.358820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.359077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.359110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.359360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.359395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.359583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.359616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.359832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.359865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.359983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.360017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.360199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.360246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.360507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.360539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.360653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.360686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.360858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.360891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.361132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.361166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.361355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.361389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.361514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.361547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.361812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.361846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.362097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.362131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.362322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.362357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.362464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.362499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.362768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.362801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.363054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.363088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.363358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.363431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.615 [2024-11-20 08:27:37.363562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.615 [2024-11-20 08:27:37.363598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.615 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.363787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.363821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.364008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.364042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.364158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.364190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.364330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.364363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.364546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.364579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.364690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.364723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.364833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.364866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.364999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.365033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.365222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.365255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.365430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.365463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.365583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.365616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.365793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.365833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.366018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.366051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.366164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.366196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.366402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.366435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.366607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.366640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.366846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.366880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.367098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.367129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.367369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.367404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.367537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.367570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.367747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.616 [2024-11-20 08:27:37.367779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.616 qpair failed and we were unable to recover it.
00:30:23.616 [2024-11-20 08:27:37.367979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.616 [2024-11-20 08:27:37.368012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.616 qpair failed and we were unable to recover it. 00:30:23.616 [2024-11-20 08:27:37.368149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.616 [2024-11-20 08:27:37.368182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.616 qpair failed and we were unable to recover it. 00:30:23.616 [2024-11-20 08:27:37.368457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.616 [2024-11-20 08:27:37.368490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.616 qpair failed and we were unable to recover it. 00:30:23.616 [2024-11-20 08:27:37.368770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.616 [2024-11-20 08:27:37.368803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.616 qpair failed and we were unable to recover it. 00:30:23.616 [2024-11-20 08:27:37.368926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.616 [2024-11-20 08:27:37.368959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.616 qpair failed and we were unable to recover it. 
00:30:23.616 [2024-11-20 08:27:37.369225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.616 [2024-11-20 08:27:37.369258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.616 qpair failed and we were unable to recover it. 00:30:23.616 [2024-11-20 08:27:37.369457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.616 [2024-11-20 08:27:37.369489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.616 qpair failed and we were unable to recover it. 00:30:23.616 [2024-11-20 08:27:37.369674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.616 [2024-11-20 08:27:37.369706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.616 qpair failed and we were unable to recover it. 00:30:23.616 [2024-11-20 08:27:37.369891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.616 [2024-11-20 08:27:37.369924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.616 qpair failed and we were unable to recover it. 00:30:23.616 [2024-11-20 08:27:37.370124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.616 [2024-11-20 08:27:37.370156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.616 qpair failed and we were unable to recover it. 
00:30:23.616 [2024-11-20 08:27:37.370287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.616 [2024-11-20 08:27:37.370320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.616 qpair failed and we were unable to recover it. 00:30:23.616 [2024-11-20 08:27:37.370505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.616 [2024-11-20 08:27:37.370538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.616 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.370751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.370784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.370955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.370987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.371173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.371216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 
00:30:23.617 [2024-11-20 08:27:37.371341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.371374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.371488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.371522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.371658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.371697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.371808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.371842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.372028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.372061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 
00:30:23.617 [2024-11-20 08:27:37.372343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.372376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.372553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.372586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.372723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.372756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.372937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.372970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.373095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.373128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 
00:30:23.617 [2024-11-20 08:27:37.373249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.373284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.373472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.373506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.373689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.373724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.373980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.374013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.374211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.374245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 
00:30:23.617 [2024-11-20 08:27:37.374455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.374487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.374621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.374654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.374895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.374927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.375057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.375090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.375281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.375315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 
00:30:23.617 [2024-11-20 08:27:37.375499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.375530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.375717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.375750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.375939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.375972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.376157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.376189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.376390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.376423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 
00:30:23.617 [2024-11-20 08:27:37.376527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.376560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.376757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.376790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.376964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.376998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.377198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.377242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.377434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.377467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 
00:30:23.617 [2024-11-20 08:27:37.377660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.377692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.377822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.377855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.378120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.378154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.378361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.617 [2024-11-20 08:27:37.378394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.617 qpair failed and we were unable to recover it. 00:30:23.617 [2024-11-20 08:27:37.378635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.378669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 
00:30:23.618 [2024-11-20 08:27:37.378864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.378896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.379005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.379039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.379155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.379187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.379441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.379474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.379643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.379676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 
00:30:23.618 [2024-11-20 08:27:37.379851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.379884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.380095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.380128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.380312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.380352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.380541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.380574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.380742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.380775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 
00:30:23.618 [2024-11-20 08:27:37.380947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.380980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.381109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.381143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.381324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.381359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.381565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.381598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.381806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.381839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 
00:30:23.618 [2024-11-20 08:27:37.382025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.382057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.382258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.382292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.382480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.382512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.382703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.382737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.382916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.382949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 
00:30:23.618 [2024-11-20 08:27:37.383141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.383174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.383364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.383397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.383572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.383605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.383775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.383808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.383992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.384024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 
00:30:23.618 [2024-11-20 08:27:37.384280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.384315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.384437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.384471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.384589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.384621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.384742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.384774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 00:30:23.618 [2024-11-20 08:27:37.384893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.618 [2024-11-20 08:27:37.384925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.618 qpair failed and we were unable to recover it. 
00:30:23.621 [2024-11-20 08:27:37.409143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.621 [2024-11-20 08:27:37.409176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.621 qpair failed and we were unable to recover it. 00:30:23.621 [2024-11-20 08:27:37.409369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.621 [2024-11-20 08:27:37.409404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.621 qpair failed and we were unable to recover it. 00:30:23.621 [2024-11-20 08:27:37.409588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.621 [2024-11-20 08:27:37.409620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.621 qpair failed and we were unable to recover it. 00:30:23.621 [2024-11-20 08:27:37.409895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.621 [2024-11-20 08:27:37.409928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.621 qpair failed and we were unable to recover it. 00:30:23.621 [2024-11-20 08:27:37.410182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.621 [2024-11-20 08:27:37.410224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.621 qpair failed and we were unable to recover it. 
00:30:23.622 [2024-11-20 08:27:37.410402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.410434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.410625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.410658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.410858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.410891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.411063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.411096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.411292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.411327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 
00:30:23.622 [2024-11-20 08:27:37.411529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.411561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.411681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.411715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.411921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.411954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.412163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.412196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.412398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.412433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 
00:30:23.622 [2024-11-20 08:27:37.412675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.412708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.412971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.413004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.413196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.413239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.413483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.413517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.413755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.413786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 
00:30:23.622 [2024-11-20 08:27:37.413981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.414013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.414221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.414256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.414462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.414494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.414722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.414756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.414931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.414963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 
00:30:23.622 [2024-11-20 08:27:37.415092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.415124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.415300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.415335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.415534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.415573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.415699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.415730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.415833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.415866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 
00:30:23.622 [2024-11-20 08:27:37.416078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.416111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.416284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.416317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.416501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.416533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.416724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.416757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.416952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.416985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 
00:30:23.622 [2024-11-20 08:27:37.417226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.417260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.417378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.417410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.417516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.417549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.417738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.417770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.417879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.417911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 
00:30:23.622 [2024-11-20 08:27:37.418155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.418188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.418393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.418426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.622 [2024-11-20 08:27:37.418630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.622 [2024-11-20 08:27:37.418663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.622 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.418846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.418879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.419063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.419095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 
00:30:23.623 [2024-11-20 08:27:37.419269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.419304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.419413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.419445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.419635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.419667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.419838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.419870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.420138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.420170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 
00:30:23.623 [2024-11-20 08:27:37.420380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.420413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.420595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.420627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.420913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.420946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.421138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.421170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.421318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.421372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 
00:30:23.623 [2024-11-20 08:27:37.421503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.421536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.421742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.421775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.422048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.422080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.422322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.422356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.422628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.422661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 
00:30:23.623 [2024-11-20 08:27:37.422770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.422803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.423016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.423049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.423166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.423199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.423484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.423517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.423690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.423723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 
00:30:23.623 [2024-11-20 08:27:37.423862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.423895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.424133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.424167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.424394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.424433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.424615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.424648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.424760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.424794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 
00:30:23.623 [2024-11-20 08:27:37.424917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.424949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.425075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.425109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.425286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.425321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.425511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.425544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 00:30:23.623 [2024-11-20 08:27:37.425722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.623 [2024-11-20 08:27:37.425755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.623 qpair failed and we were unable to recover it. 
00:30:23.623 [2024-11-20 08:27:37.425967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.623 [2024-11-20 08:27:37.426001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.623 qpair failed and we were unable to recover it.
[The same pair of errors — connect() failed, errno = 111 and sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 — repeats on every retry from 08:27:37.426268 through 08:27:37.446144, each followed by "qpair failed and we were unable to recover it."; the duplicate entries are elided.]
[Retries against tqpair=0x7fc868000b90 (addr=10.0.0.2, port=4420) continue failing with errno = 111 through 08:27:37.447057 and are elided. At 08:27:37.447331 the failing qpair changes:]
00:30:23.626 [2024-11-20 08:27:37.447331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.626 [2024-11-20 08:27:37.447402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:23.626 qpair failed and we were unable to recover it.
[The same failure repeats for tqpair=0x191fba0 (addr=10.0.0.2, port=4420) through 08:27:37.448602; duplicate entries elided.]
00:30:23.626 [2024-11-20 08:27:37.448730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.626 [2024-11-20 08:27:37.448765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.626 qpair failed and we were unable to recover it. 00:30:23.626 [2024-11-20 08:27:37.448880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.626 [2024-11-20 08:27:37.448914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.626 qpair failed and we were unable to recover it. 00:30:23.626 [2024-11-20 08:27:37.449109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.626 [2024-11-20 08:27:37.449142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.626 qpair failed and we were unable to recover it. 00:30:23.626 [2024-11-20 08:27:37.449291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.626 [2024-11-20 08:27:37.449324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.626 qpair failed and we were unable to recover it. 00:30:23.626 [2024-11-20 08:27:37.449532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.626 [2024-11-20 08:27:37.449566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.626 qpair failed and we were unable to recover it. 
00:30:23.626 [2024-11-20 08:27:37.449741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.626 [2024-11-20 08:27:37.449773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.626 qpair failed and we were unable to recover it. 00:30:23.626 [2024-11-20 08:27:37.449965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.626 [2024-11-20 08:27:37.449998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.626 qpair failed and we were unable to recover it. 00:30:23.626 [2024-11-20 08:27:37.450237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.626 [2024-11-20 08:27:37.450271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.626 qpair failed and we were unable to recover it. 00:30:23.626 [2024-11-20 08:27:37.450485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.626 [2024-11-20 08:27:37.450518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.626 qpair failed and we were unable to recover it. 00:30:23.626 [2024-11-20 08:27:37.450640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.626 [2024-11-20 08:27:37.450673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 
00:30:23.627 [2024-11-20 08:27:37.450912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.450945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.451066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.451098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.451285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.451318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.451504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.451537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.451655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.451688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 
00:30:23.627 [2024-11-20 08:27:37.451806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.451838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.452098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.452132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.452309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.452342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.452463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.452495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.452669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.452701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 
00:30:23.627 [2024-11-20 08:27:37.452942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.452975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.453116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.453149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.453397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.453431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.453611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.453645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.453828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.453860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 
00:30:23.627 [2024-11-20 08:27:37.454102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.454135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.454372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.454406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.454676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.454709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.454893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.454925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.455098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.455132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 
00:30:23.627 [2024-11-20 08:27:37.455257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.455291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.455494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.455527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.455673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.455704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.455881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.455914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.456108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.456146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 
00:30:23.627 [2024-11-20 08:27:37.456278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.456312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.456575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.456608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.456782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.456814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.456984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.457017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.457155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.457188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 
00:30:23.627 [2024-11-20 08:27:37.457437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.457470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.457652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.457685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.457806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.457839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.458095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.458129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.458417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.458451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 
00:30:23.627 [2024-11-20 08:27:37.458632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.458664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.458847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.627 [2024-11-20 08:27:37.458881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.627 qpair failed and we were unable to recover it. 00:30:23.627 [2024-11-20 08:27:37.459014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.459045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.459252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.459287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.459551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.459583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 
00:30:23.628 [2024-11-20 08:27:37.459754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.459786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.459917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.459950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.460167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.460200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.460384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.460417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.460600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.460633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 
00:30:23.628 [2024-11-20 08:27:37.460819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.460853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.461037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.461069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.461240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.461275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.461533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.461566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.461748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.461781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 
00:30:23.628 [2024-11-20 08:27:37.461889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.461919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.462191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.462235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.462408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.462440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.462629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.462663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.462834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.462867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 
00:30:23.628 [2024-11-20 08:27:37.463108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.463141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.463336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.463370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.463500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.463533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.463724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.463757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.463930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.463963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 
00:30:23.628 [2024-11-20 08:27:37.464156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.464189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.464490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.464523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.464645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.464678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.464789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.464822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.465007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.465046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 
00:30:23.628 [2024-11-20 08:27:37.465231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.465265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.465436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.465469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.465596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.465629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.465817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.465850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.466155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.466188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 
00:30:23.628 [2024-11-20 08:27:37.466436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.466470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.466644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.466678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.466815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.466848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.467042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.467075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 00:30:23.628 [2024-11-20 08:27:37.467247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.628 [2024-11-20 08:27:37.467280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.628 qpair failed and we were unable to recover it. 
00:30:23.628 [2024-11-20 08:27:37.467400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.629 [2024-11-20 08:27:37.467433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:23.629 qpair failed and we were unable to recover it.
00:30:23.629 [... the same "connect() failed, errno = 111" / "qpair failed and we were unable to recover it" pair repeats continuously from 08:27:37.467572 through 08:27:37.488546, first for tqpair=0x7fc868000b90 and then for tqpair=0x7fc864000b90, all against addr=10.0.0.2, port=4420 ...]
00:30:23.632 [2024-11-20 08:27:37.488704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.488734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.488969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.488998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.489252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.489284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.489385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.489407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.489575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.489595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 
00:30:23.632 [2024-11-20 08:27:37.489696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.489714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.489789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.489807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.489972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.489993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.490214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.490235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.490343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.490362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 
00:30:23.632 [2024-11-20 08:27:37.490451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.490469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.490643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.490663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.490821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.490840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.491024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.491053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.491299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.491328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 
00:30:23.632 [2024-11-20 08:27:37.491431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.491458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.491565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.491593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.491756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.491783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.491990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.492017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.492112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.492141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 
00:30:23.632 [2024-11-20 08:27:37.492306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.492336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.492521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.492550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.492663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.492690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.492799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.492826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.492933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.492960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 
00:30:23.632 [2024-11-20 08:27:37.493119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.493148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.493368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.493400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.493502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.493530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.493628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.493656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.493754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.493781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 
00:30:23.632 [2024-11-20 08:27:37.493970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.493998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.494126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.494154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.494327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.494357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.494487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.494517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.632 qpair failed and we were unable to recover it. 00:30:23.632 [2024-11-20 08:27:37.494747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.632 [2024-11-20 08:27:37.494781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 
00:30:23.633 [2024-11-20 08:27:37.495032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.495061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.495178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.495223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.495332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.495359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.495476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.495504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.495703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.495732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 
00:30:23.633 [2024-11-20 08:27:37.495919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.495946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.496052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.496080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.496311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.496342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.496506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.496534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.496695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.496722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 
00:30:23.633 [2024-11-20 08:27:37.496951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.496981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.497221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.497252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.497417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.497446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.497651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.497680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.497919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.497949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 
00:30:23.633 [2024-11-20 08:27:37.498124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.498152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.498385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.498416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.498648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.498676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.498792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.498820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.498994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.499024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 
00:30:23.633 [2024-11-20 08:27:37.499266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.499296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.499460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.499488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.499612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.499641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.499815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.499844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.499941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.499969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 
00:30:23.633 [2024-11-20 08:27:37.500229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.500260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.500512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.500542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.500648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.500676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.500860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.500888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.500996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.501024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 
00:30:23.633 [2024-11-20 08:27:37.501140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.501170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.501376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.501407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.501583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.501611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.501710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.501735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.501912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.501941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 
00:30:23.633 [2024-11-20 08:27:37.502060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.502088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.502317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.502347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.502451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.502480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.502736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.502765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 00:30:23.633 [2024-11-20 08:27:37.502927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.502961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.633 qpair failed and we were unable to recover it. 
00:30:23.633 [2024-11-20 08:27:37.503158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.633 [2024-11-20 08:27:37.503187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.634 qpair failed and we were unable to recover it. 00:30:23.634 [2024-11-20 08:27:37.503411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.634 [2024-11-20 08:27:37.503440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.634 qpair failed and we were unable to recover it. 00:30:23.634 [2024-11-20 08:27:37.503606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.634 [2024-11-20 08:27:37.503635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.634 qpair failed and we were unable to recover it. 00:30:23.634 [2024-11-20 08:27:37.503873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.634 [2024-11-20 08:27:37.503901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.634 qpair failed and we were unable to recover it. 00:30:23.634 [2024-11-20 08:27:37.504079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.634 [2024-11-20 08:27:37.504109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.634 qpair failed and we were unable to recover it. 
00:30:23.636 [2024-11-20 08:27:37.519039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.519060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 00:30:23.636 [2024-11-20 08:27:37.519361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.519433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 00:30:23.636 [2024-11-20 08:27:37.519749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.519787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 00:30:23.636 [2024-11-20 08:27:37.519977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.520011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 00:30:23.636 [2024-11-20 08:27:37.520248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.520282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 
00:30:23.636 [2024-11-20 08:27:37.520463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.520496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 00:30:23.636 [2024-11-20 08:27:37.520671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.520705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 00:30:23.636 [2024-11-20 08:27:37.520829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.520861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 00:30:23.636 [2024-11-20 08:27:37.521075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.521108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 00:30:23.636 [2024-11-20 08:27:37.521231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.521260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 
00:30:23.636 [2024-11-20 08:27:37.521486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.521507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 00:30:23.636 [2024-11-20 08:27:37.521676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.521699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 00:30:23.636 [2024-11-20 08:27:37.521863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.521885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 00:30:23.636 [2024-11-20 08:27:37.522105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.636 [2024-11-20 08:27:37.522128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.636 qpair failed and we were unable to recover it. 00:30:23.636 [2024-11-20 08:27:37.522343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.522370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 
00:30:23.637 [2024-11-20 08:27:37.522550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.522566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.522717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.522732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.522817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.522832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.522995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.523010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.523181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.523196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 
00:30:23.637 [2024-11-20 08:27:37.523277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.523292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.523359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.523373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.523450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.523465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.523608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.523623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.523777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.523792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 
00:30:23.637 [2024-11-20 08:27:37.523861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.523874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.524028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.524051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.524161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.524182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.524307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.524329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.524409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.524430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 
00:30:23.637 [2024-11-20 08:27:37.524607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.524629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.524788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.524806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.524892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.524907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.525086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.525101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.525177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.525191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 
00:30:23.637 [2024-11-20 08:27:37.525278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.525292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.525473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.525489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.525559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.525573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.525778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.525793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.525949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.525964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 
00:30:23.637 [2024-11-20 08:27:37.526130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.526145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.526289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.526311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.526414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.526435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.526587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.526608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.526786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.526816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 
00:30:23.637 [2024-11-20 08:27:37.526989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.527017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.527143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.527169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.527288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.527318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.527547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.527574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.527697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.527724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 
00:30:23.637 [2024-11-20 08:27:37.527824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.527851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.528075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.528103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.528272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.528303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.528464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.528491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 00:30:23.637 [2024-11-20 08:27:37.528740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.637 [2024-11-20 08:27:37.528773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.637 qpair failed and we were unable to recover it. 
00:30:23.637 [2024-11-20 08:27:37.528881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.528908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.529018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.529043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.529228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.529256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.529539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.529568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.529797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.529824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 
00:30:23.638 [2024-11-20 08:27:37.529937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.529964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.530076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.530102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.530358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.530387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.530507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.530534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.530774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.530801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 
00:30:23.638 [2024-11-20 08:27:37.530923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.530950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.531120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.531147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.531418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.531448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.531566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.531594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.531762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.531788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 
00:30:23.638 [2024-11-20 08:27:37.531960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.531988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.532107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.532134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.532242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.532269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.532386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.532412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.532531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.532558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 
00:30:23.638 [2024-11-20 08:27:37.532746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.532774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.532932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.532960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.533059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.533086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.533259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.533288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.533535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.533562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 
00:30:23.638 [2024-11-20 08:27:37.533667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.533696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.533877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.533905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.534011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.534038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.534194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.534230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.534393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.534421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 
00:30:23.638 [2024-11-20 08:27:37.534514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.534541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.534630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.534655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.534836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.534864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.535025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.535051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.535180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.535241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 
00:30:23.638 [2024-11-20 08:27:37.535406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.535434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.535555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.535581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.535745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.535772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.535881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.638 [2024-11-20 08:27:37.535908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.638 qpair failed and we were unable to recover it. 00:30:23.638 [2024-11-20 08:27:37.536088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.536123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 
00:30:23.639 [2024-11-20 08:27:37.536247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.536275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.536465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.536491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.536655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.536683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.536907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.536926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.537159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.537179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 
00:30:23.639 [2024-11-20 08:27:37.537287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.537307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.537445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.537465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.537627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.537647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.537825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.537839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.537989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.538002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 
00:30:23.639 [2024-11-20 08:27:37.538087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.538100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.538250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.538264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.538324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.538336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.538485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.538498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.538629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.538643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 
00:30:23.639 [2024-11-20 08:27:37.538837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.538852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.538928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.538940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.539081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.539094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.539239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.539253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.539408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.539427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 
00:30:23.639 [2024-11-20 08:27:37.539591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.539609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.539702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.539721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.539871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.539890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.540119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.540138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.540226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.540244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 
00:30:23.639 [2024-11-20 08:27:37.540399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.540418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.540625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.540639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.540707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.540724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.639 [2024-11-20 08:27:37.540797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.639 [2024-11-20 08:27:37.540810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.639 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.540977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.540991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 
00:30:23.640 [2024-11-20 08:27:37.541121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.541134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.541224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.541238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.541386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.541400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.541549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.541562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.541630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.541643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 
00:30:23.640 [2024-11-20 08:27:37.541724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.541737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.541798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.541811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.541943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.541957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.542111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.542131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.542275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.542295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 
00:30:23.640 [2024-11-20 08:27:37.542487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.542507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.542682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.542699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.542785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.542799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.542934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.542947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.543164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.543178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 
00:30:23.640 [2024-11-20 08:27:37.543243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.543256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.543452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.543465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.543550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.543562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.543696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.543710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.543860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.543873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 
00:30:23.640 [2024-11-20 08:27:37.544029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.544042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.544184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.544197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.544347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.544367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.544473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.544492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.544645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.544665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 
00:30:23.640 [2024-11-20 08:27:37.544739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.544756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.544902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.544922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.545096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.545116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.545219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.545238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.545340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.545360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 
00:30:23.640 [2024-11-20 08:27:37.545524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.545544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.545762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.545783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.545937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.545956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.546165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.546185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.546365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.546383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 
00:30:23.640 [2024-11-20 08:27:37.546463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.546476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.640 qpair failed and we were unable to recover it. 00:30:23.640 [2024-11-20 08:27:37.546555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.640 [2024-11-20 08:27:37.546574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.546817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.546831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.547032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.547046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.547111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.547123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 
00:30:23.641 [2024-11-20 08:27:37.547210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.547225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.547307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.547323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.547465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.547480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.547624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.547639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.547787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.547802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 
00:30:23.641 [2024-11-20 08:27:37.547893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.547914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.548011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.548032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.548179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.548205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.548366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.548388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.548489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.548512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 
00:30:23.641 [2024-11-20 08:27:37.548608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.548633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.548744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.548767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.548957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.548978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.549193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.549226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 00:30:23.641 [2024-11-20 08:27:37.549297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.549310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 
00:30:23.641 [2024-11-20 08:27:37.549536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.641 [2024-11-20 08:27:37.549551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.641 qpair failed and we were unable to recover it. 
00:30:23.644 [2024-11-20 08:27:37.568041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.644 [2024-11-20 08:27:37.568072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.644 qpair failed and we were unable to recover it. 00:30:23.644 [2024-11-20 08:27:37.568261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.644 [2024-11-20 08:27:37.568292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.644 qpair failed and we were unable to recover it. 00:30:23.644 [2024-11-20 08:27:37.568474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.644 [2024-11-20 08:27:37.568508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.644 qpair failed and we were unable to recover it. 00:30:23.644 [2024-11-20 08:27:37.568682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.644 [2024-11-20 08:27:37.568714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.644 qpair failed and we were unable to recover it. 00:30:23.644 [2024-11-20 08:27:37.568821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.644 [2024-11-20 08:27:37.568851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.644 qpair failed and we were unable to recover it. 
00:30:23.644 [2024-11-20 08:27:37.569028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.644 [2024-11-20 08:27:37.569060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.644 qpair failed and we were unable to recover it. 00:30:23.959 [2024-11-20 08:27:37.569324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.959 [2024-11-20 08:27:37.569357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.959 qpair failed and we were unable to recover it. 00:30:23.959 [2024-11-20 08:27:37.569545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.959 [2024-11-20 08:27:37.569578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.959 qpair failed and we were unable to recover it. 00:30:23.959 [2024-11-20 08:27:37.569790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.959 [2024-11-20 08:27:37.569823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.959 qpair failed and we were unable to recover it. 00:30:23.959 [2024-11-20 08:27:37.569947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.959 [2024-11-20 08:27:37.569978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.959 qpair failed and we were unable to recover it. 
00:30:23.959 [2024-11-20 08:27:37.570155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.959 [2024-11-20 08:27:37.570186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.959 qpair failed and we were unable to recover it. 00:30:23.959 [2024-11-20 08:27:37.570298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.959 [2024-11-20 08:27:37.570330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.959 qpair failed and we were unable to recover it. 00:30:23.959 [2024-11-20 08:27:37.570517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.959 [2024-11-20 08:27:37.570549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.959 qpair failed and we were unable to recover it. 00:30:23.959 [2024-11-20 08:27:37.570661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.959 [2024-11-20 08:27:37.570693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.959 qpair failed and we were unable to recover it. 00:30:23.959 [2024-11-20 08:27:37.570823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.959 [2024-11-20 08:27:37.570854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.959 qpair failed and we were unable to recover it. 
00:30:23.959 [2024-11-20 08:27:37.571059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.959 [2024-11-20 08:27:37.571091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.959 qpair failed and we were unable to recover it. 00:30:23.959 [2024-11-20 08:27:37.571264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.959 [2024-11-20 08:27:37.571296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.959 qpair failed and we were unable to recover it. 00:30:23.959 [2024-11-20 08:27:37.571410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.959 [2024-11-20 08:27:37.571441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.959 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.571550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.571581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.571679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.571709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 
00:30:23.960 [2024-11-20 08:27:37.571920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.571952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.572190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.572231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.572488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.572521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.572693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.572725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.572927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.572958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 
00:30:23.960 [2024-11-20 08:27:37.573139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.573171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.573364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.573403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.573592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.573625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.573802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.573834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.574011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.574043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 
00:30:23.960 [2024-11-20 08:27:37.574146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.574176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.574377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.574410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.574522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.574553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.574668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.574700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.574872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.574902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 
00:30:23.960 [2024-11-20 08:27:37.575006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.575039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.575218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.575251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.575353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.575384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.575509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.575540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.575642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.575674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 
00:30:23.960 [2024-11-20 08:27:37.575944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.575976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.576082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.576112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.576222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.576254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.576373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.576406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.576660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.576693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 
00:30:23.960 [2024-11-20 08:27:37.576903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.576935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.577119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.577152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.577338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.960 [2024-11-20 08:27:37.577371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.960 qpair failed and we were unable to recover it. 00:30:23.960 [2024-11-20 08:27:37.577541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.577572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.577696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.577728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 
00:30:23.961 [2024-11-20 08:27:37.577926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.577956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.578194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.578234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.578365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.578398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.578517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.578538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.578685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.578706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 
00:30:23.961 [2024-11-20 08:27:37.578861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.578880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.578960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.578974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.579175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.579190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.579354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.579371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.579447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.579461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 
00:30:23.961 [2024-11-20 08:27:37.579541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.579554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.579687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.579703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.579769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.579782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.579859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.579873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.579955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.579969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 
00:30:23.961 [2024-11-20 08:27:37.580099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.580114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.580188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.580212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.580345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.580359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.580502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.580522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.580604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.580623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 
00:30:23.961 [2024-11-20 08:27:37.580698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.580717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.580812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.580833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.580981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.581002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.581105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.581126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 00:30:23.961 [2024-11-20 08:27:37.581289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.581311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it. 
00:30:23.961 [2024-11-20 08:27:37.581461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.961 [2024-11-20 08:27:37.581478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.961 qpair failed and we were unable to recover it.
00:30:23.965 [2024-11-20 08:27:37.600005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.965 [2024-11-20 08:27:37.600021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.965 qpair failed and we were unable to recover it. 00:30:23.965 [2024-11-20 08:27:37.600111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.965 [2024-11-20 08:27:37.600129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.965 qpair failed and we were unable to recover it. 00:30:23.965 [2024-11-20 08:27:37.600218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.965 [2024-11-20 08:27:37.600237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.965 qpair failed and we were unable to recover it. 00:30:23.965 [2024-11-20 08:27:37.600393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.965 [2024-11-20 08:27:37.600411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.965 qpair failed and we were unable to recover it. 00:30:23.965 [2024-11-20 08:27:37.600555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.965 [2024-11-20 08:27:37.600572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.965 qpair failed and we were unable to recover it. 
00:30:23.965 [2024-11-20 08:27:37.600728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.965 [2024-11-20 08:27:37.600747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.965 qpair failed and we were unable to recover it. 00:30:23.965 [2024-11-20 08:27:37.600971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.965 [2024-11-20 08:27:37.600990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.965 qpair failed and we were unable to recover it. 00:30:23.965 [2024-11-20 08:27:37.601249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.965 [2024-11-20 08:27:37.601269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.965 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.601366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.601384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.601537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.601555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 
00:30:23.966 [2024-11-20 08:27:37.601710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.601729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.601869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.601886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.601975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.601993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.602067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.602083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.602185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.602207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 
00:30:23.966 [2024-11-20 08:27:37.602293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.602310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.602417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.602434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.602648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.602667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.602826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.602845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.602984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.603003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 
00:30:23.966 [2024-11-20 08:27:37.603098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.603116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.603198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.603226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.603310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.603328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.603482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.603500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.603642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.603658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 
00:30:23.966 [2024-11-20 08:27:37.603745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.603758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.603892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.603905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.603986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.603998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.604060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.604072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.604138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.604150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 
00:30:23.966 [2024-11-20 08:27:37.604306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.604321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.604518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.604531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.604595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.604607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.604737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.604751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.604826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.604837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 
00:30:23.966 [2024-11-20 08:27:37.604915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.604927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.605010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.605026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.605189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.605205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.605282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.605298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.605369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.605387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 
00:30:23.966 [2024-11-20 08:27:37.605530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.605547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.605692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.605711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.605854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.605872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.605947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.605964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 00:30:23.966 [2024-11-20 08:27:37.606058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.966 [2024-11-20 08:27:37.606076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.966 qpair failed and we were unable to recover it. 
00:30:23.967 [2024-11-20 08:27:37.606231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.606250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.606395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.606414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.606563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.606581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.606669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.606683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.606815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.606828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 
00:30:23.967 [2024-11-20 08:27:37.606906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.606918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.606976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.606987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.607075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.607088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.607218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.607231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.607363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.607376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 
00:30:23.967 [2024-11-20 08:27:37.607456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.607469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.607529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.607541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.607671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.607684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.607747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.607759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.607892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.607905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 
00:30:23.967 [2024-11-20 08:27:37.607987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.607999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.608076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.608088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.608221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.608239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.608420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.608483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.608749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.608820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 
00:30:23.967 [2024-11-20 08:27:37.609030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.609068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.609196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.609247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.609424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.609458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.609566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.609599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.609722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.609756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 
00:30:23.967 [2024-11-20 08:27:37.609863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.609896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.610061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.610084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.610295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.610314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.610400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.610414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 00:30:23.967 [2024-11-20 08:27:37.610559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.610576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 
00:30:23.967 [2024-11-20 08:27:37.610760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.967 [2024-11-20 08:27:37.610777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.967 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-11-20 08:27:37.627101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.627116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.627195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.627216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.627289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.627305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.627466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.627484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.627636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.627652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-11-20 08:27:37.627796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.627814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.627904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.627920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.628080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.628097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.628173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.628188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.628356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.628374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-11-20 08:27:37.628579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.628595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.628681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.628697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.628849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.628866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.629022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.629040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.629118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.629134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-11-20 08:27:37.629229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.629247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.629396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.629414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.629556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.629573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.629749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.629770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.629848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.629868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-11-20 08:27:37.629964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.629986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.630085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.630105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.630206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.630228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.630309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.630332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 00:30:23.972 [2024-11-20 08:27:37.630493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.972 [2024-11-20 08:27:37.630509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.972 qpair failed and we were unable to recover it. 
00:30:23.972 [2024-11-20 08:27:37.630605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.630620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.630699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.630714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.630848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.630862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.630947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.630962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.631099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.631113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 
00:30:23.973 [2024-11-20 08:27:37.631184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.631198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.631274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.631290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.631389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.631404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.631546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.631565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.631648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.631664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 
00:30:23.973 [2024-11-20 08:27:37.631802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.631817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.631889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.631906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.631986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.632007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.632096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.632117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.632331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.632354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 
00:30:23.973 [2024-11-20 08:27:37.632513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.632534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.632630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.632651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.632732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.632754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.632910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.632932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.633077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.633099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 
00:30:23.973 [2024-11-20 08:27:37.633261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.633280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.633353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.633366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.633448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.633463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.633554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.633569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.633735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.633751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 
00:30:23.973 [2024-11-20 08:27:37.633899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.633914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.633978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.633992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.634077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.634091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.634184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.634199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.973 qpair failed and we were unable to recover it. 00:30:23.973 [2024-11-20 08:27:37.634356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.973 [2024-11-20 08:27:37.634372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 
00:30:23.974 [2024-11-20 08:27:37.634439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.634454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.634598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.634612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.634761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.634784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.634879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.634900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.635011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.635032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 
00:30:23.974 [2024-11-20 08:27:37.635131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.635152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.635333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.635355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.635528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.635551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.635698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.635715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.635867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.635883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 
00:30:23.974 [2024-11-20 08:27:37.635983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.635999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.636076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.636091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.636298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.636313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.636449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.636464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.636558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.636573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 
00:30:23.974 [2024-11-20 08:27:37.636774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.636789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.636868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.636883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.636971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.636986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.637075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.637099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.637342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.637365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 
00:30:23.974 [2024-11-20 08:27:37.637530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.637552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.637646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.637667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.637758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.637779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.637946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.637965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 00:30:23.974 [2024-11-20 08:27:37.638100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.974 [2024-11-20 08:27:37.638116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.974 qpair failed and we were unable to recover it. 
00:30:23.978 [2024-11-20 08:27:37.655614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.978 [2024-11-20 08:27:37.655628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.978 qpair failed and we were unable to recover it. 00:30:23.978 [2024-11-20 08:27:37.655771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.978 [2024-11-20 08:27:37.655792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.978 qpair failed and we were unable to recover it. 00:30:23.978 [2024-11-20 08:27:37.655948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.978 [2024-11-20 08:27:37.655967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.978 qpair failed and we were unable to recover it. 00:30:23.978 [2024-11-20 08:27:37.656222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.978 [2024-11-20 08:27:37.656252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.978 qpair failed and we were unable to recover it. 00:30:23.978 [2024-11-20 08:27:37.656396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.978 [2024-11-20 08:27:37.656415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.978 qpair failed and we were unable to recover it. 
00:30:23.978 [2024-11-20 08:27:37.656574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.978 [2024-11-20 08:27:37.656590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.978 qpair failed and we were unable to recover it. 00:30:23.978 [2024-11-20 08:27:37.656784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.978 [2024-11-20 08:27:37.656798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.978 qpair failed and we were unable to recover it. 00:30:23.978 [2024-11-20 08:27:37.656995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.978 [2024-11-20 08:27:37.657009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.978 qpair failed and we were unable to recover it. 00:30:23.978 [2024-11-20 08:27:37.657226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.978 [2024-11-20 08:27:37.657240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.978 qpair failed and we were unable to recover it. 00:30:23.978 [2024-11-20 08:27:37.657302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.978 [2024-11-20 08:27:37.657315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.978 qpair failed and we were unable to recover it. 
00:30:23.978 [2024-11-20 08:27:37.657408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.978 [2024-11-20 08:27:37.657422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.978 qpair failed and we were unable to recover it. 00:30:23.978 [2024-11-20 08:27:37.657571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.978 [2024-11-20 08:27:37.657585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.657670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.657685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.657923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.657937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.658017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.658037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 
00:30:23.979 [2024-11-20 08:27:37.658119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.658138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.658317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.658337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.658445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.658464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.658572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.658591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.658768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.658786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 
00:30:23.979 [2024-11-20 08:27:37.658870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.658884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.659086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.659101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.659233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.659247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.659322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.659335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.659460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.659488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 
00:30:23.979 [2024-11-20 08:27:37.659645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.659662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.659790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.659807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.659957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.659975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.660071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.660088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.660240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.660258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 
00:30:23.979 [2024-11-20 08:27:37.660363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.660388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.660541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.660565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.660694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.660719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.660910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.660936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.661107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.661131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 
00:30:23.979 [2024-11-20 08:27:37.661413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.661435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.661600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.661619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.661761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.661779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.661850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.979 [2024-11-20 08:27:37.661867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.979 qpair failed and we were unable to recover it. 00:30:23.979 [2024-11-20 08:27:37.662025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.662042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 
00:30:23.980 [2024-11-20 08:27:37.662191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.662221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.662311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.662329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.662427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.662444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.662598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.662620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.662775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.662794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 
00:30:23.980 [2024-11-20 08:27:37.662946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.662971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.663082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.663106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.663283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.663307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.663410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.663434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.663586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.663611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 
00:30:23.980 [2024-11-20 08:27:37.663807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.663828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.663932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.663950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.664036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.664053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.664224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.664242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.664387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.664405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 
00:30:23.980 [2024-11-20 08:27:37.664514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.664531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.664717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.664735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.664817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.664834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.665047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.665064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.665158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.665176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 
00:30:23.980 [2024-11-20 08:27:37.665251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.665267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.665340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.665362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.665465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.665490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.665588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.665612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.665783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.665807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 
00:30:23.980 [2024-11-20 08:27:37.665988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.666014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.666169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.666194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.666335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.666362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.980 [2024-11-20 08:27:37.666508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.980 [2024-11-20 08:27:37.666526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.980 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-20 08:27:37.666609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-20 08:27:37.666626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 
00:30:23.981 [2024-11-20 08:27:37.666777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-20 08:27:37.666794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-20 08:27:37.666935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-20 08:27:37.666953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-20 08:27:37.667119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-20 08:27:37.667137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-20 08:27:37.667230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-20 08:27:37.667248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 00:30:23.981 [2024-11-20 08:27:37.667329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.981 [2024-11-20 08:27:37.667346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.981 qpair failed and we were unable to recover it. 
00:30:23.981 [2024-11-20 08:27:37.667450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:23.981 [2024-11-20 08:27:37.667467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:23.981 qpair failed and we were unable to recover it.
00:30:23.984 [2024-11-20 08:27:37.691947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.984 [2024-11-20 08:27:37.691973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.984 qpair failed and we were unable to recover it. 00:30:23.984 [2024-11-20 08:27:37.692148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.984 [2024-11-20 08:27:37.692174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.984 qpair failed and we were unable to recover it. 00:30:23.984 [2024-11-20 08:27:37.692350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.984 [2024-11-20 08:27:37.692376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.984 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.692491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.692517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.692611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.692637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 
00:30:23.985 [2024-11-20 08:27:37.692738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.692766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.692948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.692985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.693112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.693149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.693357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.693396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.693522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.693559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 
00:30:23.985 [2024-11-20 08:27:37.693833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.693872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.694006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.694044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.694217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.694254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.694467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.694518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.694707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.694744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 
00:30:23.985 [2024-11-20 08:27:37.694855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.694901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.695088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.695124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.695353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.695392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.695527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.695569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.695773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.695817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 
00:30:23.985 [2024-11-20 08:27:37.696067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.696107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.696366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.696406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.696621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.696657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.696799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.696837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.697031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.697070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 
00:30:23.985 [2024-11-20 08:27:37.697251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.697289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.697561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.697599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.697809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.697848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.698148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.698186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.698425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.698462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 
00:30:23.985 [2024-11-20 08:27:37.698607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.698645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.698767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.698803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.699003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.985 [2024-11-20 08:27:37.699039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.985 qpair failed and we were unable to recover it. 00:30:23.985 [2024-11-20 08:27:37.699284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.699324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.699575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.699606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 
00:30:23.986 [2024-11-20 08:27:37.699714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.699740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.699876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.699897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.700050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.700071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.700258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.700280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.700362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.700383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 
00:30:23.986 [2024-11-20 08:27:37.700624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.700646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.700735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.700756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.700918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.700940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.701104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.701134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.701326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.701358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 
00:30:23.986 [2024-11-20 08:27:37.701543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.701574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.701744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.701775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.701943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.701973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.702184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.702221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.702333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.702365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 
00:30:23.986 [2024-11-20 08:27:37.702486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.702518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.702690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.702720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.702950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.702981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.703153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.703191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.703406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.703438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 
00:30:23.986 [2024-11-20 08:27:37.703696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.703727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.703993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.704024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.704126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.704155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.704414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.704447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.704549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.704588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 
00:30:23.986 [2024-11-20 08:27:37.704776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.986 [2024-11-20 08:27:37.704808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.986 qpair failed and we were unable to recover it. 00:30:23.986 [2024-11-20 08:27:37.704976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.705008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.705200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.705241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.705408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.705439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.705543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.705573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 
00:30:23.987 [2024-11-20 08:27:37.705753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.705784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.705909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.705939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.706047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.706078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.706265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.706298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.706406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.706437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 
00:30:23.987 [2024-11-20 08:27:37.706600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.706631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.706869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.706900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.707014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.707044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.707290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.707321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.707529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.707561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 
00:30:23.987 [2024-11-20 08:27:37.707755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.707785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.707975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.708005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.708125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.708155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.708338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.708370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 00:30:23.987 [2024-11-20 08:27:37.708562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.987 [2024-11-20 08:27:37.708593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.987 qpair failed and we were unable to recover it. 
00:30:23.991 [2024-11-20 08:27:37.729505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.729540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.729804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.729830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.730021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.730045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.730160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.730183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.730432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.730456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 
00:30:23.991 [2024-11-20 08:27:37.730621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.730644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.730747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.730770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.730967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.730986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.731096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.731116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.731209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.731229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 
00:30:23.991 [2024-11-20 08:27:37.731445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.731478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.731597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.731625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.731722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.731750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.731856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.731884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.732056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.732084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 
00:30:23.991 [2024-11-20 08:27:37.732216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.732245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.732407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.732434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.732544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.732573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.732738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.732765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.732927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.732954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 
00:30:23.991 [2024-11-20 08:27:37.733111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.733140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.733311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.733354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.733483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.733512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.733685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.991 [2024-11-20 08:27:37.733723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.991 qpair failed and we were unable to recover it. 00:30:23.991 [2024-11-20 08:27:37.733975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.734003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 
00:30:23.992 [2024-11-20 08:27:37.734188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.734223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.734469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.734490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.734713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.734732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.734910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.734929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.735086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.735105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 
00:30:23.992 [2024-11-20 08:27:37.735193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.735220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.735461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.735481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.735640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.735659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.735897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.735917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.736098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.736117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 
00:30:23.992 [2024-11-20 08:27:37.736278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.736298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.736566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.736586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.736692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.736712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.736904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.736924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.737136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.737155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 
00:30:23.992 [2024-11-20 08:27:37.737332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.737352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.737449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.737480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.737642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.737661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.737745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.737765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.737939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.737958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 
00:30:23.992 [2024-11-20 08:27:37.738047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.738066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.738155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.738177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.738354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.738373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.738581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.738600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.738773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.738793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 
00:30:23.992 [2024-11-20 08:27:37.738968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.738992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.739131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.739151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.739375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.739398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.739631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.739651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.992 qpair failed and we were unable to recover it. 00:30:23.992 [2024-11-20 08:27:37.739822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.992 [2024-11-20 08:27:37.739843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 
00:30:23.993 [2024-11-20 08:27:37.739936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.739954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.740102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.740122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.740294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.740315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.740469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.740489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.740584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.740603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 
00:30:23.993 [2024-11-20 08:27:37.740823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.740844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.740996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.741016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.741245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.741272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.741429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.741456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.741572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.741599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 
00:30:23.993 [2024-11-20 08:27:37.741712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.741739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.741910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.741936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.742118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.742144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.742306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.742333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.742442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.742467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 
00:30:23.993 [2024-11-20 08:27:37.742561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.742587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.742696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.742723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.742880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.742907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.743146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.743173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 00:30:23.993 [2024-11-20 08:27:37.743280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.743306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it. 
00:30:23.993 [2024-11-20 08:27:37.743574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.993 [2024-11-20 08:27:37.743601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.993 qpair failed and we were unable to recover it.
[log elided: the three-line message above (connect() failed, errno = 111; sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats verbatim for every reconnect attempt from 08:27:37.743574 through 08:27:37.767663; only the timestamps differ]
00:30:23.997 [2024-11-20 08:27:37.767859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.767894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.768017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.768051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.768226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.768261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.768401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.768434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.768633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.768666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 
00:30:23.997 [2024-11-20 08:27:37.768905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.768939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.769180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.769220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.769495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.769529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.769787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.769820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.770028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.770061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 
00:30:23.997 [2024-11-20 08:27:37.770253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.770288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.770529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.770562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.770802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.770835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.771019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.771052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.771314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.771348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 
00:30:23.997 [2024-11-20 08:27:37.771470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.771504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.771701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.997 [2024-11-20 08:27:37.771734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.997 qpair failed and we were unable to recover it. 00:30:23.997 [2024-11-20 08:27:37.771844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.771877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.772070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.772104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.772289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.772324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 
00:30:23.998 [2024-11-20 08:27:37.772434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.772473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.772667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.772701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.772896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.772929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.773055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.773090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.773262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.773298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 
00:30:23.998 [2024-11-20 08:27:37.773401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.773434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.773565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.773598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.773733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.773767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.773888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.773922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.774096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.774129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 
00:30:23.998 [2024-11-20 08:27:37.774324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.774359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.774535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.774569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.774739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.774773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.774963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.774997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.775130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.775164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 
00:30:23.998 [2024-11-20 08:27:37.775357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.775391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.775666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.775699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.775825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.775859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.776051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.776085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.776352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.776386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 
00:30:23.998 [2024-11-20 08:27:37.776517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.776552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.776733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.776767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.776949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.776982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.777092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.777125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.777304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.777340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 
00:30:23.998 [2024-11-20 08:27:37.777559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.777592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.998 [2024-11-20 08:27:37.777785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.998 [2024-11-20 08:27:37.777818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.998 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.778006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.778040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.778224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.778259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.778437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.778469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 
00:30:23.999 [2024-11-20 08:27:37.778658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.778692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.778874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.778907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.779151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.779185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.779443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.779477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.779648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.779681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 
00:30:23.999 [2024-11-20 08:27:37.779811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.779844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.780025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.780058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.780240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.780276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.780460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.780493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.780666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.780699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 
00:30:23.999 [2024-11-20 08:27:37.780891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.780930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.781035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.781069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.781265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.781300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.781445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.781479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.781737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.781770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 
00:30:23.999 [2024-11-20 08:27:37.782007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.782041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.782286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.782321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.782570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.782603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.782727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.782761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.782951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.782984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 
00:30:23.999 [2024-11-20 08:27:37.783256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.783291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.783421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.783455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.783639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.783673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.783843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.783878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 00:30:23.999 [2024-11-20 08:27:37.784058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.784092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 
00:30:23.999 [2024-11-20 08:27:37.784263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:23.999 [2024-11-20 08:27:37.784299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:23.999 qpair failed and we were unable to recover it. 
00:30:24.003 [... the identical error triplet (posix.c:1054 connect() failed, errno = 111 (ECONNREFUSED); nvme_tcp.c:2288 sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 08:27:37.784430 through 08:27:37.808996, wall clock 00:30:23.999-00:30:24.003 ...] 
00:30:24.003 [2024-11-20 08:27:37.809179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.809222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.809433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.809466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.809758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.809790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.809911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.809944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.810076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.810110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 
00:30:24.003 [2024-11-20 08:27:37.810240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.810275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.810398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.810432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.810550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.810584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.810757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.810790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.810912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.810945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 
00:30:24.003 [2024-11-20 08:27:37.811084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.811118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.811297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.811332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.811531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.811565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.811685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.811718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.811829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.811863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 
00:30:24.003 [2024-11-20 08:27:37.811983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.812016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.812191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.812235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.812431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.812464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.812677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.812710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.812915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.812948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 
00:30:24.003 [2024-11-20 08:27:37.813245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.813280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.813468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.813503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.813696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.813729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.813917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.813951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 00:30:24.003 [2024-11-20 08:27:37.814142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.814175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.003 qpair failed and we were unable to recover it. 
00:30:24.003 [2024-11-20 08:27:37.814426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.003 [2024-11-20 08:27:37.814460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.814633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.814666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.814853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.814887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.815079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.815111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.815223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.815258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 
00:30:24.004 [2024-11-20 08:27:37.815445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.815479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.815590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.815624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.815831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.815864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.816136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.816170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.816370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.816404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 
00:30:24.004 [2024-11-20 08:27:37.816525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.816557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.816724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.816758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.816928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.816962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.817060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.817094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.817367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.817403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 
00:30:24.004 [2024-11-20 08:27:37.817605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.817639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.817772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.817806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.817930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.817962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.818146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.818179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.818335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.818370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 
00:30:24.004 [2024-11-20 08:27:37.818644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.818677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.818791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.818824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.819035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.819067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.819276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.819311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.819493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.819526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 
00:30:24.004 [2024-11-20 08:27:37.819667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.819701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.819873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.819908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.820091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.820125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.820244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.820279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.820452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.820486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 
00:30:24.004 [2024-11-20 08:27:37.820665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.820698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.004 [2024-11-20 08:27:37.820886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.004 [2024-11-20 08:27:37.820921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.004 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.821178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.821228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.821432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.821465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.821598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.821631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 
00:30:24.005 [2024-11-20 08:27:37.821905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.821939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.822123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.822156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.822418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.822453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.822666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.822700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.822909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.822941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 
00:30:24.005 [2024-11-20 08:27:37.823128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.823162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.823298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.823333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.823516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.823549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.823654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.823687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.823879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.823912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 
00:30:24.005 [2024-11-20 08:27:37.824087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.824120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.824384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.824419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.824607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.824640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.824868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.824902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.825082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.825117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 
00:30:24.005 [2024-11-20 08:27:37.825233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.825268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.825386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.825419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.825593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.825626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.825812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.825846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 00:30:24.005 [2024-11-20 08:27:37.826034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.005 [2024-11-20 08:27:37.826067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.005 qpair failed and we were unable to recover it. 
00:30:24.008 [2024-11-20 08:27:37.849843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.849876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.850066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.850100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.850272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.850320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.850503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.850536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.850776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.850809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 
00:30:24.008 [2024-11-20 08:27:37.850995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.851029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.851133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.851166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.851298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.851333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.851534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.851568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.851683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.851716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 
00:30:24.008 [2024-11-20 08:27:37.851955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.851988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.852178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.852218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.852390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.852423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.852541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.852574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.852767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.852801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 
00:30:24.008 [2024-11-20 08:27:37.853071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.853104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.853292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.853326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.853528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.008 [2024-11-20 08:27:37.853561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.008 qpair failed and we were unable to recover it. 00:30:24.008 [2024-11-20 08:27:37.853764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.853796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.854077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.854110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 
00:30:24.009 [2024-11-20 08:27:37.854253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.854288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.854468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.854502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.854717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.854750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.854887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.854920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.855024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.855057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 
00:30:24.009 [2024-11-20 08:27:37.855239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.855274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.855408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.855441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.855688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.855727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.855843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.855876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.856007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.856041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 
00:30:24.009 [2024-11-20 08:27:37.856171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.856211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.856340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.856373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.856613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.856648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.856844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.856876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.856992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.857025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 
00:30:24.009 [2024-11-20 08:27:37.857227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.857262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.857370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.857402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.857590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.857624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.857875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.857909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.858027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.858061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 
00:30:24.009 [2024-11-20 08:27:37.858346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.858382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.858572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.858606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.858798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.858832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.859015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.859048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.859183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.859226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 
00:30:24.009 [2024-11-20 08:27:37.859410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.859444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.859571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.859604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.859776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.859810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.859984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.860018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.860192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.860234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 
00:30:24.009 [2024-11-20 08:27:37.860441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.860475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.860688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.860722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.860838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.860872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.861085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.861118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.861310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.861347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 
00:30:24.009 [2024-11-20 08:27:37.861536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.861568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.009 [2024-11-20 08:27:37.861816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.009 [2024-11-20 08:27:37.861850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.009 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.862087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.862122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.862239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.862274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.862408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.862442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 
00:30:24.010 [2024-11-20 08:27:37.862655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.862688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.862808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.862839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.863008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.863039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.863328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.863363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.863500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.863533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 
00:30:24.010 [2024-11-20 08:27:37.863783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.863817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.864054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.864087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.864266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.864306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.864487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.864520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.864646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.864680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 
00:30:24.010 [2024-11-20 08:27:37.864856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.864889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.865155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.865188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.865390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.865424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.865545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.865578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.865752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.865785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 
00:30:24.010 [2024-11-20 08:27:37.865990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.866023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.866231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.866266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.866370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.866403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.866540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.866573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 00:30:24.010 [2024-11-20 08:27:37.866757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.010 [2024-11-20 08:27:37.866790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.010 qpair failed and we were unable to recover it. 
00:30:24.013 [2024-11-20 08:27:37.891221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.891255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 00:30:24.013 [2024-11-20 08:27:37.891385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.891419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 00:30:24.013 [2024-11-20 08:27:37.891634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.891668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 00:30:24.013 [2024-11-20 08:27:37.891912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.891945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 00:30:24.013 [2024-11-20 08:27:37.892128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.892161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 
00:30:24.013 [2024-11-20 08:27:37.892440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.892475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 00:30:24.013 [2024-11-20 08:27:37.892753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.892785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 00:30:24.013 [2024-11-20 08:27:37.892957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.892990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 00:30:24.013 [2024-11-20 08:27:37.893168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.893211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 00:30:24.013 [2024-11-20 08:27:37.893400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.893434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 
00:30:24.013 [2024-11-20 08:27:37.893549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.893582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 00:30:24.013 [2024-11-20 08:27:37.893779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.893812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 00:30:24.013 [2024-11-20 08:27:37.894015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.013 [2024-11-20 08:27:37.894048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.013 qpair failed and we were unable to recover it. 00:30:24.013 [2024-11-20 08:27:37.894160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.894193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.894415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.894448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 
00:30:24.014 [2024-11-20 08:27:37.894662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.894695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.894811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.894844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.894963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.894997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.895170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.895213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.895499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.895532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 
00:30:24.014 [2024-11-20 08:27:37.895646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.895678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.895865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.895898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.896156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.896190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.896373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.896407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.896627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.896660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 
00:30:24.014 [2024-11-20 08:27:37.896791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.896825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.897013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.897046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.897239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.897274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.897397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.897430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.897642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.897675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 
00:30:24.014 [2024-11-20 08:27:37.897866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.897899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.898090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.898124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.898331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.898366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.898496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.898530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.898767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.898800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 
00:30:24.014 [2024-11-20 08:27:37.899007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.899040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.899219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.899254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.899442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.899481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.899671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.899706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.899883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.899916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 
00:30:24.014 [2024-11-20 08:27:37.900084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.900117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.900235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.014 [2024-11-20 08:27:37.900270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.014 qpair failed and we were unable to recover it. 00:30:24.014 [2024-11-20 08:27:37.900468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.900502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.900763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.900796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.900966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.900998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 
00:30:24.015 [2024-11-20 08:27:37.901174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.901216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.901409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.901441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.901647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.901680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.901802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.901836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.902009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.902043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 
00:30:24.015 [2024-11-20 08:27:37.902224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.902258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.902438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.902472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.902667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.902699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.902831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.902864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.903056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.903090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 
00:30:24.015 [2024-11-20 08:27:37.903193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.903247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.903357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.903388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.903631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.903664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.903782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.903815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.904058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.904093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 
00:30:24.015 [2024-11-20 08:27:37.904290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.904325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.904449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.904482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.904724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.904757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.904973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.905006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.905193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.905237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 
00:30:24.015 [2024-11-20 08:27:37.905359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.905392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.905578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.905612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.905744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.905777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.906018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.906052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.906235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.906270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 
00:30:24.015 [2024-11-20 08:27:37.906450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.906483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.906666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.906701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.906831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.906864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.907067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.015 [2024-11-20 08:27:37.907100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.015 qpair failed and we were unable to recover it. 00:30:24.015 [2024-11-20 08:27:37.907269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.016 [2024-11-20 08:27:37.907304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.016 qpair failed and we were unable to recover it. 
00:30:24.016 [2024-11-20 08:27:37.907490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.016 [2024-11-20 08:27:37.907524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.016 qpair failed and we were unable to recover it.
... (the same connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats continuously from 08:27:37.907719 through 08:27:37.932618, alternating between tqpair=0x7fc864000b90 and tqpair=0x7fc868000b90, always addr=10.0.0.2, port=4420) ...
00:30:24.020 [2024-11-20 08:27:37.932852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.020 [2024-11-20 08:27:37.932885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.020 qpair failed and we were unable to recover it.
00:30:24.020 [2024-11-20 08:27:37.933076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.933110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.933239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.933275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.933472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.933505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.933691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.933724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.933910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.933944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 
00:30:24.020 [2024-11-20 08:27:37.934185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.934228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.934353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.934385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.934584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.934617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.934743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.934776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.934897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.934929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 
00:30:24.020 [2024-11-20 08:27:37.935145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.935179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.935431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.935464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.935599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.935632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.935813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.935847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.936030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.936064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 
00:30:24.020 [2024-11-20 08:27:37.936254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.936289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.936475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.936508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.936712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.936744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.937008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.937042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.937182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.937225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 
00:30:24.020 [2024-11-20 08:27:37.937411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.937444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.937712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.937745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.937938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.937971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.938247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.938282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.938403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.938437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 
00:30:24.020 [2024-11-20 08:27:37.938678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.020 [2024-11-20 08:27:37.938711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.020 qpair failed and we were unable to recover it. 00:30:24.020 [2024-11-20 08:27:37.938964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.938996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.939209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.939244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.939494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.939527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.939765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.939797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 
00:30:24.021 [2024-11-20 08:27:37.939970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.940004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.940185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.940227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.940476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.940509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.940775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.940808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.940999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.941038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 
00:30:24.021 [2024-11-20 08:27:37.941169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.941210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.941340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.941373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.941564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.941597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.941715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.941748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.942015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.942049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 
00:30:24.021 [2024-11-20 08:27:37.942237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.942272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.942445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.942478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.942604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.942638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.942928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.942962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.943080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.943114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 
00:30:24.021 [2024-11-20 08:27:37.943362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.943396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.943519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.943552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.943675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.943708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.943846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.943878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.944051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.944084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 
00:30:24.021 [2024-11-20 08:27:37.944270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.944305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.944431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.944464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.944630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.944662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.944838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.944872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.944996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.945029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 
00:30:24.021 [2024-11-20 08:27:37.945222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.021 [2024-11-20 08:27:37.945256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.021 qpair failed and we were unable to recover it. 00:30:24.021 [2024-11-20 08:27:37.945438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.945472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.945645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.945677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.945858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.945891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.946062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.946096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 
00:30:24.022 [2024-11-20 08:27:37.946270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.946304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.946431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.946464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.946756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.946789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.947007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.947040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.947238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.947273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 
00:30:24.022 [2024-11-20 08:27:37.947409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.947442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.947662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.947695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.947867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.947900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.948089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.948121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.948319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.948354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 
00:30:24.022 [2024-11-20 08:27:37.948547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.948579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.948690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.948723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.948916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.948949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.949146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.949180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.949375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.949434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 
00:30:24.022 [2024-11-20 08:27:37.949557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.949591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.949707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.949738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.949941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.949974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.950149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.950181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 00:30:24.022 [2024-11-20 08:27:37.950407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.022 [2024-11-20 08:27:37.950441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.022 qpair failed and we were unable to recover it. 
00:30:24.318 [2024-11-20 08:27:37.974236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.318 [2024-11-20 08:27:37.974271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.318 qpair failed and we were unable to recover it. 00:30:24.318 [2024-11-20 08:27:37.974537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.318 [2024-11-20 08:27:37.974569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.318 qpair failed and we were unable to recover it. 00:30:24.318 [2024-11-20 08:27:37.974695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.318 [2024-11-20 08:27:37.974729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.318 qpair failed and we were unable to recover it. 00:30:24.318 [2024-11-20 08:27:37.974858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.318 [2024-11-20 08:27:37.974891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.318 qpair failed and we were unable to recover it. 00:30:24.318 [2024-11-20 08:27:37.975015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.318 [2024-11-20 08:27:37.975054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.318 qpair failed and we were unable to recover it. 
00:30:24.318 [2024-11-20 08:27:37.975311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.975346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.975529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.975563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.975693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.975726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.975919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.975952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.976199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.976241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 
00:30:24.319 [2024-11-20 08:27:37.976428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.976460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.976644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.976677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.976860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.976894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.977165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.977198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.977387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.977421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 
00:30:24.319 [2024-11-20 08:27:37.977608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.977642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.977825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.977859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.978046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.978079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.978269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.978304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.978410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.978441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 
00:30:24.319 [2024-11-20 08:27:37.978633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.978666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.978793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.978827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.979003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.979036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.979165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.979198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.979397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.979431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 
00:30:24.319 [2024-11-20 08:27:37.979534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.979567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.979808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.979841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.980114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.980147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.980337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.980372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.980645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.980679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 
00:30:24.319 [2024-11-20 08:27:37.980795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.980827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.981093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.981127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.981353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.981388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.981509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.981542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.319 qpair failed and we were unable to recover it. 00:30:24.319 [2024-11-20 08:27:37.981712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.319 [2024-11-20 08:27:37.981745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 
00:30:24.320 [2024-11-20 08:27:37.981933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.981965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.982083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.982117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.982288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.982323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.982502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.982536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.982719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.982753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 
00:30:24.320 [2024-11-20 08:27:37.982936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.982968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.983221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.983255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.983434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.983467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.983642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.983676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.983790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.983829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 
00:30:24.320 [2024-11-20 08:27:37.983952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.983986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.984177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.984217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.984485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.984518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.984636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.984669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.984839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.984873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 
00:30:24.320 [2024-11-20 08:27:37.985110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.985142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.985321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.985357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.985622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.985656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.985862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.985898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.986067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.986101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 
00:30:24.320 [2024-11-20 08:27:37.986248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.986283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.986396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.986429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.986598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.986632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.986767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.986801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.986975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.987008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 
00:30:24.320 [2024-11-20 08:27:37.987195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.987237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.987478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.987512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.987633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.987667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.320 [2024-11-20 08:27:37.987788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.320 [2024-11-20 08:27:37.987822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.320 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.987946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.987979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 
00:30:24.321 [2024-11-20 08:27:37.988173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.988213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.988404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.988438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.988624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.988658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.988869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.988902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.989019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.989053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 
00:30:24.321 [2024-11-20 08:27:37.989231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.989265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.989448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.989481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.989682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.989715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.989906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.989939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.990182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.990237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 
00:30:24.321 [2024-11-20 08:27:37.990440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.990474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.990739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.990772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.990961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.990995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.991264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.991298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.991421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.991454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 
00:30:24.321 [2024-11-20 08:27:37.991644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.991677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.991930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.991963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.992231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.992266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.992471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.992503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 00:30:24.321 [2024-11-20 08:27:37.992626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.321 [2024-11-20 08:27:37.992665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.321 qpair failed and we were unable to recover it. 
00:30:24.322 [2024-11-20 08:27:37.992792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.992825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.992944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.992976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.993237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.993272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.993489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.993522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.993696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.993730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 
00:30:24.322 [2024-11-20 08:27:37.993920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.993953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.994072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.994105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.994276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.994311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.994446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.994480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.994664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.994698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 
00:30:24.322 [2024-11-20 08:27:37.994806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.994840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.995022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.995054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.995231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.995266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.995528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.995562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.995686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.995719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 
00:30:24.322 [2024-11-20 08:27:37.995905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.995938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.996174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.996215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.996341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.996375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.996547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.996580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.996721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.996754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 
00:30:24.322 [2024-11-20 08:27:37.996927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.996961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.997148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.997180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.997366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.997400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.997612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.997646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.997915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.997948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 
00:30:24.322 [2024-11-20 08:27:37.998239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.998273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.998544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.998577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.998706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.998740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.998924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.998958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.322 qpair failed and we were unable to recover it. 00:30:24.322 [2024-11-20 08:27:37.999150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.322 [2024-11-20 08:27:37.999183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 
00:30:24.323 [2024-11-20 08:27:37.999303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:37.999336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:37.999604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:37.999638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:37.999828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:37.999862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.000034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.000068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.000243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.000278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 
00:30:24.323 [2024-11-20 08:27:38.000396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.000429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.000602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.000635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.000878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.000911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.001095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.001128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.001304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.001350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 
00:30:24.323 [2024-11-20 08:27:38.001590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.001624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.001840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.001873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.002139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.002173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.002376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.002409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.002669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.002703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 
00:30:24.323 [2024-11-20 08:27:38.002880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.002914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.003155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.003188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.003490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.003524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.003778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.003812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.003920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.003952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 
00:30:24.323 [2024-11-20 08:27:38.004140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.004173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.004428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.004462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.004653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.004687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.004863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.004897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.005137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.005171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 
00:30:24.323 [2024-11-20 08:27:38.005450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.005484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.005667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.005701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.005915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.005948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.006239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.323 [2024-11-20 08:27:38.006275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.323 qpair failed and we were unable to recover it. 00:30:24.323 [2024-11-20 08:27:38.006388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.006422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 
00:30:24.324 [2024-11-20 08:27:38.006613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.006647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.006913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.006946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.007129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.007163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.007290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.007324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.007495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.007529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 
00:30:24.324 [2024-11-20 08:27:38.007699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.007732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.007977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.008051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.008291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.008330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.008599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.008633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.008878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.008911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 
00:30:24.324 [2024-11-20 08:27:38.009052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.009085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.009276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.009312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.009507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.009540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.009668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.009701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.009887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.009920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 
00:30:24.324 [2024-11-20 08:27:38.010221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.010256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.010502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.010536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.010745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.010778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.010907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.010939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.011145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.011195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 
00:30:24.324 [2024-11-20 08:27:38.011331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.011365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.011609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.011643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.011834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.011866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.012111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.012145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.012278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.012313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 
00:30:24.324 [2024-11-20 08:27:38.012516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.012549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.012739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.012773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.012898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.012930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.013115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.013148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 00:30:24.324 [2024-11-20 08:27:38.013422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.324 [2024-11-20 08:27:38.013456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.324 qpair failed and we were unable to recover it. 
00:30:24.324 [2024-11-20 08:27:38.013697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.325 [2024-11-20 08:27:38.013730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.325 qpair failed and we were unable to recover it. 00:30:24.325 [2024-11-20 08:27:38.013966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.325 [2024-11-20 08:27:38.014000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.325 qpair failed and we were unable to recover it. 00:30:24.325 [2024-11-20 08:27:38.014284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.325 [2024-11-20 08:27:38.014320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.325 qpair failed and we were unable to recover it. 00:30:24.325 [2024-11-20 08:27:38.014503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.325 [2024-11-20 08:27:38.014536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.325 qpair failed and we were unable to recover it. 00:30:24.325 [2024-11-20 08:27:38.014646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.325 [2024-11-20 08:27:38.014678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.325 qpair failed and we were unable to recover it. 
00:30:24.325 [2024-11-20 08:27:38.014818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.014852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.015067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.015099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.015294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.015328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.015445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.015477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.015730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.015763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.015956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.015990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.016165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.016199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.016339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.016371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.016488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.016521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.016708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.016743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.016877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.016909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.017048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.017086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.017217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.017251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.017363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.017397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.017497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.017531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.017798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.017848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.018052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.018084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.018326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.018362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.018469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.018502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.018679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.018714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.018979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.019012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.019238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.019272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.019457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.019491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.019678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.019711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.019900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.019939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.020073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.020107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.020365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.020400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.020628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.020661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.020775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.325 [2024-11-20 08:27:38.020810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.325 qpair failed and we were unable to recover it.
00:30:24.325 [2024-11-20 08:27:38.020928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.020961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.021198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.021239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.021509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.021542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.021721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.021754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.021890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.021923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.022108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.022140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.022314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.022349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.022545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.022578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.022687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.022720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.022833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.022865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.023131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.023164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.023382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.023416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.023601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.023633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.023870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.023904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.024090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.024123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.024305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.024340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.024529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.024563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.024780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.024813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.025075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.025109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.025371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.025407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.025532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.025564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.025690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.025723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.026044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.026081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.026323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.026358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.026534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.026566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.326 qpair failed and we were unable to recover it.
00:30:24.326 [2024-11-20 08:27:38.026801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.326 [2024-11-20 08:27:38.026835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.026972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.027004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.027176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.027218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.027333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.027363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.027494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.027527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.027741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.027773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.027988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.028021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.028212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.028246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.028458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.028491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.028742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.028774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.028942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.028982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.029186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.029228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.029495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.029527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.029784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.029817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.030026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.030059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.030330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.030366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.030486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.030519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.030760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.030793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.030975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.031008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.031267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.031301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.031428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.031461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.031637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.031670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.031851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.031884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.032094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.032128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.032325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.032360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.032497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.032531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.327 qpair failed and we were unable to recover it.
00:30:24.327 [2024-11-20 08:27:38.032793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.327 [2024-11-20 08:27:38.032827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.033070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.033103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.033242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.033276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.033460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.033492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.033704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.033738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.033924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.033956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.034138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.034171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.034413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.034450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.034653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.034687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.034818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.034850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.034977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.035010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.035218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.035254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.035393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.035427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.035578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.035611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.035798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.328 [2024-11-20 08:27:38.035830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.328 qpair failed and we were unable to recover it.
00:30:24.328 [2024-11-20 08:27:38.036068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.036103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.036226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.036262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.036437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.036470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.036717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.036751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.036991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.037025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 
00:30:24.328 [2024-11-20 08:27:38.037144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.037177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.037387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.037421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.037611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.037643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.037772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.037805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.037994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.038027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 
00:30:24.328 [2024-11-20 08:27:38.038232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.038267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.038446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.038479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.038602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.038634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.038777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.038809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.039013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.039046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 
00:30:24.328 [2024-11-20 08:27:38.039158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.039191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.328 [2024-11-20 08:27:38.039472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.328 [2024-11-20 08:27:38.039504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.328 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.039692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.039725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.039905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.039938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.040092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.040124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 
00:30:24.329 [2024-11-20 08:27:38.040295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.040331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.040576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.040609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.040717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.040749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.041001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.041035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.041274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.041308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 
00:30:24.329 [2024-11-20 08:27:38.041489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.041521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.041726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.041759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.041954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.041986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.042185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.042241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.042440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.042473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 
00:30:24.329 [2024-11-20 08:27:38.042611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.042644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.042820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.042853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.043041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.043074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.043210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.043244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.043559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.043593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 
00:30:24.329 [2024-11-20 08:27:38.043856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.043888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.044098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.044136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.044316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.044350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.044539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.044572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.044785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.044819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 
00:30:24.329 [2024-11-20 08:27:38.045028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.045062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.045173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.045214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.045423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.045456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.045635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.045669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.045918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.045950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 
00:30:24.329 [2024-11-20 08:27:38.046101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.046135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.046327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.329 [2024-11-20 08:27:38.046362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.329 qpair failed and we were unable to recover it. 00:30:24.329 [2024-11-20 08:27:38.046603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.046636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.046878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.046912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.047015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.047047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 
00:30:24.330 [2024-11-20 08:27:38.047229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.047264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.047465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.047498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.047703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.047737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.047927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.047960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.048154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.048187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 
00:30:24.330 [2024-11-20 08:27:38.048321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.048355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.048482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.048515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.048711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.048744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.048986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.049020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.049199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.049240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 
00:30:24.330 [2024-11-20 08:27:38.049394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.049427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.049693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.049725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.049898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.049932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.050067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.050101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.050339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.050374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 
00:30:24.330 [2024-11-20 08:27:38.050503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.050537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.050742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.050775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.050967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.050999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.051172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.051214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.051425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.051459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 
00:30:24.330 [2024-11-20 08:27:38.051648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.051679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.051971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.052004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.052246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.052281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.052459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.052492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.052681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.052716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 
00:30:24.330 [2024-11-20 08:27:38.052967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.053000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.053218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.053259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.053525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.330 [2024-11-20 08:27:38.053559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.330 qpair failed and we were unable to recover it. 00:30:24.330 [2024-11-20 08:27:38.053664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.331 [2024-11-20 08:27:38.053697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.331 qpair failed and we were unable to recover it. 00:30:24.331 [2024-11-20 08:27:38.053886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.331 [2024-11-20 08:27:38.053920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.331 qpair failed and we were unable to recover it. 
00:30:24.331 [2024-11-20 08:27:38.054118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.331 [2024-11-20 08:27:38.054150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.331 qpair failed and we were unable to recover it. 00:30:24.331 [2024-11-20 08:27:38.054368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.331 [2024-11-20 08:27:38.054403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.331 qpair failed and we were unable to recover it. 00:30:24.331 [2024-11-20 08:27:38.054672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.331 [2024-11-20 08:27:38.054705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.331 qpair failed and we were unable to recover it. 00:30:24.331 [2024-11-20 08:27:38.054914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.331 [2024-11-20 08:27:38.054948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.331 qpair failed and we were unable to recover it. 00:30:24.331 [2024-11-20 08:27:38.055136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.331 [2024-11-20 08:27:38.055168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.331 qpair failed and we were unable to recover it. 
00:30:24.331 [2024-11-20 08:27:38.055302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.331 [2024-11-20 08:27:38.055338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.331 qpair failed and we were unable to recover it.
00:30:24.335 [2024-11-20 08:27:38.080838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.080872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.081151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.081183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.081463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.081498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.081635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.081668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.081810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.081843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 
00:30:24.335 [2024-11-20 08:27:38.081972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.082006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.082189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.082230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.082404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.082436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.082626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.082660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.082777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.082809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 
00:30:24.335 [2024-11-20 08:27:38.082992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.083026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.083151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.083184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.083458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.083492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.083626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.083659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.083840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.083872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 
00:30:24.335 [2024-11-20 08:27:38.084054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.084088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.084223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.084257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.084498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.084532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.084703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.084737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.084913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.084944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 
00:30:24.335 [2024-11-20 08:27:38.085080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.085113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.085291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.085325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.085450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.085483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.085726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.335 [2024-11-20 08:27:38.085759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.335 qpair failed and we were unable to recover it. 00:30:24.335 [2024-11-20 08:27:38.086012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.086046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 
00:30:24.336 [2024-11-20 08:27:38.086241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.086275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.086533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.086565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.086700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.086733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.086922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.086955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.087135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.087168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 
00:30:24.336 [2024-11-20 08:27:38.087381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.087415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.087541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.087573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.087773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.087806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.088000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.088033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.088275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.088310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 
00:30:24.336 [2024-11-20 08:27:38.088571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.088604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.088804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.088842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.088965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.088998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.089123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.089156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.089293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.089327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 
00:30:24.336 [2024-11-20 08:27:38.089444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.089478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.089589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.089621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.089861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.089893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.090158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.090191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.090425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.090459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 
00:30:24.336 [2024-11-20 08:27:38.090730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.090762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.091017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.091049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.091289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.091324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.091538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.091571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.091699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.091731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 
00:30:24.336 [2024-11-20 08:27:38.092003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.092037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.092235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.092269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.092448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.092481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.092684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.092718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.092892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.092925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 
00:30:24.336 [2024-11-20 08:27:38.093191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.093232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.336 [2024-11-20 08:27:38.093434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.336 [2024-11-20 08:27:38.093467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.336 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.093652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.093685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.093810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.093844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.093966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.093998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 
00:30:24.337 [2024-11-20 08:27:38.094174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.094230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.094434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.094466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.094669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.094702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.094908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.094942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.095136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.095168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 
00:30:24.337 [2024-11-20 08:27:38.095467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.095501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.095623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.095656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.095783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.095815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.096002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.096035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.096229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.096264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 
00:30:24.337 [2024-11-20 08:27:38.096436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.096468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.096671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.096703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.096908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.096940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.097213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.097247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.097438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.097471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 
00:30:24.337 [2024-11-20 08:27:38.097713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.097745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.097918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.097955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.098142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.098175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.098314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.098348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.098462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.098495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 
00:30:24.337 [2024-11-20 08:27:38.098618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.098651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.098905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.098937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.099045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.099078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.099275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.099309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.099436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.099468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 
00:30:24.337 [2024-11-20 08:27:38.099714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.099747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.337 qpair failed and we were unable to recover it. 00:30:24.337 [2024-11-20 08:27:38.099946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.337 [2024-11-20 08:27:38.099979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.100166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.100199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.100320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.100353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.100560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.100593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 
00:30:24.338 [2024-11-20 08:27:38.100842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.100875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.101047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.101080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.101274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.101309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.101550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.101583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.101704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.101738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 
00:30:24.338 [2024-11-20 08:27:38.101934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.101967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.102181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.102230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.102373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.102405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.102608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.102640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.102882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.102914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 
00:30:24.338 [2024-11-20 08:27:38.103095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.103128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.103345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.103378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.103553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.103586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.103643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x192daf0 (9): Bad file descriptor 00:30:24.338 [2024-11-20 08:27:38.103916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.103988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.104157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.104194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 
00:30:24.338 [2024-11-20 08:27:38.104429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.104464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.104734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.104767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.104897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.104931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.105109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.105143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.105387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.105422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 
00:30:24.338 [2024-11-20 08:27:38.105666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.105699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.105836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.105870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.338 qpair failed and we were unable to recover it. 00:30:24.338 [2024-11-20 08:27:38.106140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.338 [2024-11-20 08:27:38.106173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.106450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.106485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.106673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.106706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 
00:30:24.339 [2024-11-20 08:27:38.106893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.106925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.107198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.107244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.107433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.107467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.107674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.107706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.107947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.107979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 
00:30:24.339 [2024-11-20 08:27:38.108226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.108262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.108455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.108489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.108747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.108780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.108975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.109010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.109220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.109255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 
00:30:24.339 [2024-11-20 08:27:38.109501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.109535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.109724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.109759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.109950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.109983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.110162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.110196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.110330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.110370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 
00:30:24.339 [2024-11-20 08:27:38.110502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.110535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.110799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.110831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.111004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.111037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.111225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.111260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.111381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.111413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 
00:30:24.339 [2024-11-20 08:27:38.111588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.111622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.111749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.111782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.111901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.111935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.112196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.112241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.112428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.112462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 
00:30:24.339 [2024-11-20 08:27:38.112680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.112713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.112908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.112942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.113136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.339 [2024-11-20 08:27:38.113170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.339 qpair failed and we were unable to recover it. 00:30:24.339 [2024-11-20 08:27:38.113386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.113420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.113606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.113639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 
00:30:24.340 [2024-11-20 08:27:38.113763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.113798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.113922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.113955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.114166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.114199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.114408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.114442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.114709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.114742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 
00:30:24.340 [2024-11-20 08:27:38.114852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.114884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.115089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.115123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.115350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.115386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.115631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.115664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.115844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.115877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 
00:30:24.340 [2024-11-20 08:27:38.116003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.116036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.116166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.116200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.116425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.116459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.116633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.116666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.116843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.116876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 
00:30:24.340 [2024-11-20 08:27:38.117058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.117091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.117270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.117305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.117544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.117578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.117693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.117727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.117918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.117951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 
00:30:24.340 [2024-11-20 08:27:38.118136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.118170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.118356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.118389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.118580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.118614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.118904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.118937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 00:30:24.340 [2024-11-20 08:27:38.119197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.340 [2024-11-20 08:27:38.119248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.340 qpair failed and we were unable to recover it. 
00:30:24.340 [2024-11-20 08:27:38.119517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.340 [2024-11-20 08:27:38.119550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.340 qpair failed and we were unable to recover it.
[same posix_sock_create / nvme_tcp_qpair_connect_sock error (connect() failed, errno = 111; tqpair=0x7fc864000b90, addr=10.0.0.2, port=4420) repeated through 08:27:38.145698]
00:30:24.344 [2024-11-20 08:27:38.145841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.145874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.146063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.146096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.146216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.146251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.146446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.146480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.146691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.146730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 
00:30:24.344 [2024-11-20 08:27:38.146879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.146912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.147061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.147095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.147273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.147308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.147591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.147624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.147737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.147770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 
00:30:24.344 [2024-11-20 08:27:38.147963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.147996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.148173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.148214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.148354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.148387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.148557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.148590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.148773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.148808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 
00:30:24.344 [2024-11-20 08:27:38.149078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.149111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.149285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.344 [2024-11-20 08:27:38.149333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.344 qpair failed and we were unable to recover it. 00:30:24.344 [2024-11-20 08:27:38.149455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.149487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 00:30:24.345 [2024-11-20 08:27:38.149779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.149812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 00:30:24.345 [2024-11-20 08:27:38.149993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.150027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 
00:30:24.345 [2024-11-20 08:27:38.150271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.150305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 00:30:24.345 [2024-11-20 08:27:38.150492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.150526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 00:30:24.345 [2024-11-20 08:27:38.150744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.150778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 00:30:24.345 [2024-11-20 08:27:38.150964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.150997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 00:30:24.345 [2024-11-20 08:27:38.151120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.151154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 
00:30:24.345 [2024-11-20 08:27:38.151369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.151404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 00:30:24.345 [2024-11-20 08:27:38.151595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.151628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 00:30:24.345 [2024-11-20 08:27:38.151823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.151856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 00:30:24.345 [2024-11-20 08:27:38.152048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.152081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 00:30:24.345 [2024-11-20 08:27:38.152225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.345 [2024-11-20 08:27:38.152260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.345 qpair failed and we were unable to recover it. 
00:30:24.345 [2024-11-20 08:27:38.152377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.345 [2024-11-20 08:27:38.152411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.345 qpair failed and we were unable to recover it.
00:30:24.345 [2024-11-20 08:27:38.152676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.345 [2024-11-20 08:27:38.152749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.345 qpair failed and we were unable to recover it.
[identical connect() retries against tqpair=0x7fc870000b90 repeated through 08:27:38.166314, elided]
00:30:24.346 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1860561 Killed "${NVMF_APP[@]}" "$@"
00:30:24.346 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:30:24.346 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:30:24.346 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:30:24.346 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable
00:30:24.346 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:24.347 [2024-11-20 08:27:38.166489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.347 [2024-11-20 08:27:38.166521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.347 qpair failed and we were unable to recover it. 00:30:24.347 [2024-11-20 08:27:38.166736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.347 [2024-11-20 08:27:38.166769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.347 qpair failed and we were unable to recover it. 00:30:24.347 [2024-11-20 08:27:38.166893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.347 [2024-11-20 08:27:38.166927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.347 qpair failed and we were unable to recover it. 00:30:24.347 [2024-11-20 08:27:38.167115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.347 [2024-11-20 08:27:38.167149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.347 qpair failed and we were unable to recover it. 00:30:24.347 [2024-11-20 08:27:38.167343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.347 [2024-11-20 08:27:38.167376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.347 qpair failed and we were unable to recover it. 
00:30:24.347 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=1861279
00:30:24.347 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 1861279
00:30:24.347 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:30:24.347 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 1861279 ']'
00:30:24.348 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:30:24.348 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100
00:30:24.348 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:30:24.348 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:30:24.348 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable
00:30:24.348 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:24.349 [2024-11-20 08:27:38.180129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.349 [2024-11-20 08:27:38.180217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.349 qpair failed and we were unable to recover it.
00:30:24.350 [2024-11-20 08:27:38.186243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.350 [2024-11-20 08:27:38.186279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.350 qpair failed and we were unable to recover it. 00:30:24.350 [2024-11-20 08:27:38.186455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.350 [2024-11-20 08:27:38.186488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.350 qpair failed and we were unable to recover it. 00:30:24.350 [2024-11-20 08:27:38.186608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.350 [2024-11-20 08:27:38.186640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.350 qpair failed and we were unable to recover it. 00:30:24.350 [2024-11-20 08:27:38.186838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.350 [2024-11-20 08:27:38.186871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.350 qpair failed and we were unable to recover it. 00:30:24.350 [2024-11-20 08:27:38.187048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.350 [2024-11-20 08:27:38.187081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.350 qpair failed and we were unable to recover it. 
00:30:24.350 [2024-11-20 08:27:38.187255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.350 [2024-11-20 08:27:38.187289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.350 qpair failed and we were unable to recover it. 00:30:24.350 [2024-11-20 08:27:38.187496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.350 [2024-11-20 08:27:38.187529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.350 qpair failed and we were unable to recover it. 00:30:24.350 [2024-11-20 08:27:38.187660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.350 [2024-11-20 08:27:38.187693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.350 qpair failed and we were unable to recover it. 00:30:24.350 [2024-11-20 08:27:38.187816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.350 [2024-11-20 08:27:38.187850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.350 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.188094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.188128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 
00:30:24.351 [2024-11-20 08:27:38.188246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.188281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.188420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.188455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.188661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.188695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.188876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.188947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.189163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.189200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 
00:30:24.351 [2024-11-20 08:27:38.189429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.189462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.189708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.189741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.189871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.189904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.190022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.190055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.190269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.190304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 
00:30:24.351 [2024-11-20 08:27:38.190413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.190445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.190692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.190723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.190904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.190937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.191078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.191112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.191234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.191269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 
00:30:24.351 [2024-11-20 08:27:38.191461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.191494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.191622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.191666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.191865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.191898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.192035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.192068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.192305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.192340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 
00:30:24.351 [2024-11-20 08:27:38.192556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.192589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.192719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.192752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.192956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.192988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.193166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.193199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.193465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.193498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 
00:30:24.351 [2024-11-20 08:27:38.193660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.193693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.193892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.193925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.194100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.194132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.194335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.351 [2024-11-20 08:27:38.194370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.351 qpair failed and we were unable to recover it. 00:30:24.351 [2024-11-20 08:27:38.194488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.194522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 
00:30:24.352 [2024-11-20 08:27:38.194663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.194697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.194820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.194852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.194985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.195019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.195168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.195210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.195419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.195452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 
00:30:24.352 [2024-11-20 08:27:38.195572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.195605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.195795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.195829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.196020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.196052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.196249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.196284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.196547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.196581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 
00:30:24.352 [2024-11-20 08:27:38.196766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.196800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.196929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.196962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.197173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.197217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.197413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.197487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.197713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.197752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 
00:30:24.352 [2024-11-20 08:27:38.197941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.197977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.198181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.198234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.198418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.198452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.198721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.198755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.198948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.198981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 
00:30:24.352 [2024-11-20 08:27:38.199128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.199160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.199296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.199330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.199477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.199510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.199750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.199783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.199960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.199993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 
00:30:24.352 [2024-11-20 08:27:38.200188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.200234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.200532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.200565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.200763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.200798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.200990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.201022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.201237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.201272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 
00:30:24.352 [2024-11-20 08:27:38.201469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.201502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.201798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.201831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.352 [2024-11-20 08:27:38.202029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.352 [2024-11-20 08:27:38.202062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.352 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.202213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.202249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.202362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.202395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 
00:30:24.353 [2024-11-20 08:27:38.202588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.202621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.202862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.202894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.203072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.203112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.203290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.203324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.203512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.203544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 
00:30:24.353 [2024-11-20 08:27:38.203787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.203829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.203957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.203990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.204173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.204234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.204418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.204450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.204634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.204667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 
00:30:24.353 [2024-11-20 08:27:38.204788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.204822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.204949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.204982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.205115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.205148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.205282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.205316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.205504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.205538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 
00:30:24.353 [2024-11-20 08:27:38.205714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.205748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.205864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.205897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.206098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.206131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.206267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.206302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.206420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.206453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 
00:30:24.353 [2024-11-20 08:27:38.206626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.206659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.206901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.206934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.207182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.207225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.207359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.207391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.207581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.207615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 
00:30:24.353 [2024-11-20 08:27:38.207812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.207844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.207959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.207992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.208115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.208148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.208290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.208324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.353 qpair failed and we were unable to recover it. 00:30:24.353 [2024-11-20 08:27:38.208444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.353 [2024-11-20 08:27:38.208478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 
00:30:24.354 [2024-11-20 08:27:38.208727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.208761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.208882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.208915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.209091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.209130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.209259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.209296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.209477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.209510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 
00:30:24.354 [2024-11-20 08:27:38.209642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.209675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.209920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.209954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.210134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.210168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.210298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.210337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.210514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.210547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 
00:30:24.354 [2024-11-20 08:27:38.210754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.210788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.210906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.210938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.211115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.211148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.211362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.211397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.211590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.211623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 
00:30:24.354 [2024-11-20 08:27:38.211815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.211848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.212049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.212083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.212227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.212263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.212397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.212430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.212613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.212645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 
00:30:24.354 [2024-11-20 08:27:38.212848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.212881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.213006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.213039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.213281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.213316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.213496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.213528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.213708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.213740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 
00:30:24.354 [2024-11-20 08:27:38.213874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.213907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.354 [2024-11-20 08:27:38.214086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.354 [2024-11-20 08:27:38.214118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.354 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.214306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.214341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.214556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.214590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.214804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.214841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 
00:30:24.355 [2024-11-20 08:27:38.214951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.214985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.215158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.215191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.215391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.215425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.215546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.215580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.215712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.215745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 
00:30:24.355 [2024-11-20 08:27:38.215927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.215959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.216072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.216105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.216318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.216354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.216526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.216559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.216694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.216728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 
00:30:24.355 [2024-11-20 08:27:38.216918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.216952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.217060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.217093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.217265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.217300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.217491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.217525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.217622] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:30:24.355 [2024-11-20 08:27:38.217662] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:24.355 [2024-11-20 08:27:38.217706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.217739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.217928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.217958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.218144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.218175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.218300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.218337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.218447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.218479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 
00:30:24.355 [2024-11-20 08:27:38.218598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.218631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.218806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.218839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.218957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.218990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.219120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.219153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.219347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.219381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 
00:30:24.355 [2024-11-20 08:27:38.219590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.219624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.219830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.219864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.219970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.220004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.355 qpair failed and we were unable to recover it. 00:30:24.355 [2024-11-20 08:27:38.220224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.355 [2024-11-20 08:27:38.220258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.356 qpair failed and we were unable to recover it. 00:30:24.356 [2024-11-20 08:27:38.220388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.356 [2024-11-20 08:27:38.220424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.356 qpair failed and we were unable to recover it. 
00:30:24.356 [2024-11-20 08:27:38.220613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.356 [2024-11-20 08:27:38.220646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.356 qpair failed and we were unable to recover it. 00:30:24.356 [2024-11-20 08:27:38.220847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.356 [2024-11-20 08:27:38.220880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.356 qpair failed and we were unable to recover it. 00:30:24.356 [2024-11-20 08:27:38.221097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.356 [2024-11-20 08:27:38.221130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.356 qpair failed and we were unable to recover it. 00:30:24.356 [2024-11-20 08:27:38.221251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.356 [2024-11-20 08:27:38.221286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.356 qpair failed and we were unable to recover it. 00:30:24.356 [2024-11-20 08:27:38.221487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.356 [2024-11-20 08:27:38.221519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.356 qpair failed and we were unable to recover it. 
00:30:24.356 [2024-11-20 08:27:38.221761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.356 [2024-11-20 08:27:38.221795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.356 qpair failed and we were unable to recover it. 00:30:24.356 [2024-11-20 08:27:38.221995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.356 [2024-11-20 08:27:38.222029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.356 qpair failed and we were unable to recover it. 00:30:24.356 [2024-11-20 08:27:38.222220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.356 [2024-11-20 08:27:38.222255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.356 qpair failed and we were unable to recover it. 00:30:24.356 [2024-11-20 08:27:38.222513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.356 [2024-11-20 08:27:38.222547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.356 qpair failed and we were unable to recover it. 00:30:24.356 [2024-11-20 08:27:38.222664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.356 [2024-11-20 08:27:38.222704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.356 qpair failed and we were unable to recover it. 
00:30:24.359 [2024-11-20 08:27:38.244737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.359 [2024-11-20 08:27:38.244771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.359 qpair failed and we were unable to recover it. 00:30:24.359 [2024-11-20 08:27:38.244911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.359 [2024-11-20 08:27:38.244947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.359 qpair failed and we were unable to recover it. 00:30:24.359 [2024-11-20 08:27:38.245133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.359 [2024-11-20 08:27:38.245167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.359 qpair failed and we were unable to recover it. 00:30:24.359 [2024-11-20 08:27:38.245315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.359 [2024-11-20 08:27:38.245349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.359 qpair failed and we were unable to recover it. 00:30:24.359 [2024-11-20 08:27:38.245460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.359 [2024-11-20 08:27:38.245494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.359 qpair failed and we were unable to recover it. 
00:30:24.359 [2024-11-20 08:27:38.245604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.359 [2024-11-20 08:27:38.245638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.359 qpair failed and we were unable to recover it. 00:30:24.359 [2024-11-20 08:27:38.245768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.359 [2024-11-20 08:27:38.245802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.359 qpair failed and we were unable to recover it. 00:30:24.359 [2024-11-20 08:27:38.245929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.359 [2024-11-20 08:27:38.245962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.359 qpair failed and we were unable to recover it. 00:30:24.359 [2024-11-20 08:27:38.246163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.359 [2024-11-20 08:27:38.246196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.359 qpair failed and we were unable to recover it. 00:30:24.359 [2024-11-20 08:27:38.246451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.359 [2024-11-20 08:27:38.246491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.359 qpair failed and we were unable to recover it. 
00:30:24.360 [2024-11-20 08:27:38.246666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.246701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.246832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.246866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.246987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.247021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.247223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.247258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.247378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.247412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 
00:30:24.360 [2024-11-20 08:27:38.247532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.247565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.247695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.247728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.247852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.247888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.248105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.248139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.248256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.248291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 
00:30:24.360 [2024-11-20 08:27:38.248471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.248505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.248699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.248750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.248945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.248981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.249117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.249152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.249301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.249335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 
00:30:24.360 [2024-11-20 08:27:38.249446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.249479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.249663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.249696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.249813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.249846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.250028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.250061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.250234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.250269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 
00:30:24.360 [2024-11-20 08:27:38.250400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.250432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.250538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.250571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.250750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.250783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.250960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.250995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.251122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.251155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 
00:30:24.360 [2024-11-20 08:27:38.251282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.251318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.251494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.360 [2024-11-20 08:27:38.251534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.360 qpair failed and we were unable to recover it. 00:30:24.360 [2024-11-20 08:27:38.251709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.251745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.251856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.251890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.252065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.252100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 
00:30:24.361 [2024-11-20 08:27:38.252290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.252326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.252452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.252485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.252668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.252700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.252902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.252935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.253121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.253153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 
00:30:24.361 [2024-11-20 08:27:38.253367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.253401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.253602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.253636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.253829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.253861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.254049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.254084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.254214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.254248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 
00:30:24.361 [2024-11-20 08:27:38.254441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.254475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.254657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.254690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.254812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.254845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.254966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.254999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.255243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.255278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 
00:30:24.361 [2024-11-20 08:27:38.255387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.255420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.255541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.255575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.255750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.255782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.255908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.255942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.256124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.256158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 
00:30:24.361 [2024-11-20 08:27:38.256418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.256453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.256579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.256613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.256814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.256847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.257033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.257072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.257189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.257234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 
00:30:24.361 [2024-11-20 08:27:38.257476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.257510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.257636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.257670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.257781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.257815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.257929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.257962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.258239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.258275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 
00:30:24.361 [2024-11-20 08:27:38.258449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.258482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.258597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.258630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.258744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.258777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.258893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.258927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.259052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.259085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 
00:30:24.361 [2024-11-20 08:27:38.259274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.259309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.361 [2024-11-20 08:27:38.259502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.361 [2024-11-20 08:27:38.259535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.361 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.259662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.259697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.259815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.259847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.259962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.259994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 
00:30:24.362 [2024-11-20 08:27:38.260171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.260213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.260404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.260438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.260564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.260597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.260771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.260805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.260981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.261014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 
00:30:24.362 [2024-11-20 08:27:38.261124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.261158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.261349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.261383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.261496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.261529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.261638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.261671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.261858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.261891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 
00:30:24.362 [2024-11-20 08:27:38.262003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.262041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.262167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.262200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.262345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.262378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.262509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.262543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.262738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.262771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 
00:30:24.362 [2024-11-20 08:27:38.262958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.262992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.263185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.263229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.263345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.263379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.263620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.263654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.263784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.263818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 
00:30:24.362 [2024-11-20 08:27:38.264013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.264046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.264228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.264264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.264440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.264474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.264647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.264681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.264812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.264853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 
00:30:24.362 [2024-11-20 08:27:38.264992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.265029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.265144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.265178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.265311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.265346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.265458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.265491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.265684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.265718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 
00:30:24.362 [2024-11-20 08:27:38.265902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.265936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.266111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.266144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.266344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.362 [2024-11-20 08:27:38.266380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.362 qpair failed and we were unable to recover it. 00:30:24.362 [2024-11-20 08:27:38.266558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.266591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.266710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.266743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 
00:30:24.363 [2024-11-20 08:27:38.266862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.266895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.267077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.267111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.267305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.267347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.267532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.267565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.267736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.267769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 
00:30:24.363 [2024-11-20 08:27:38.267881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.267914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.268091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.268124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.268259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.268294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.268545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.268579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.268779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.268813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 
00:30:24.363 [2024-11-20 08:27:38.269031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.269064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.269317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.269351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.269498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.269531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.269732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.269765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.269874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.269908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 
00:30:24.363 [2024-11-20 08:27:38.270036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.270069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.270261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.270296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.270440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.270474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.270615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.270649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.270843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.270876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 
00:30:24.363 [2024-11-20 08:27:38.271124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.271157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.271406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.271442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.271623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.271657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.271768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.271801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.271933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.271967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 
00:30:24.363 [2024-11-20 08:27:38.272096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.272130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.272248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.272283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.272409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.272444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.272557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.272590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.272759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.272830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 
00:30:24.363 [2024-11-20 08:27:38.272958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.272995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.273121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.273156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.273352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.273388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.273582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.273616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 00:30:24.363 [2024-11-20 08:27:38.273738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.273771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.363 qpair failed and we were unable to recover it. 
00:30:24.363 [2024-11-20 08:27:38.273974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.363 [2024-11-20 08:27:38.274008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.274121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.274154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.274359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.274394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.274677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.274711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.274834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.274868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 
00:30:24.364 [2024-11-20 08:27:38.274994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.275028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.275155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.275188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.275395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.275438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.275631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.275665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.275782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.275815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 
00:30:24.364 [2024-11-20 08:27:38.275949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.275983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.276106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.276139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.276271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.276307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.276495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.276531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.276710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.276744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 
00:30:24.364 [2024-11-20 08:27:38.276869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.276903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.277018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.277052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.277229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.277264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.277387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.277420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.277605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.277639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 
00:30:24.364 [2024-11-20 08:27:38.277831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.277865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.277986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.278020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.278128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.278161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.278306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.278342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.278521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.278554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 
00:30:24.364 [2024-11-20 08:27:38.278674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.278707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.278895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.278929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.279169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.279213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.279432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.279466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 00:30:24.364 [2024-11-20 08:27:38.279699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.364 [2024-11-20 08:27:38.279732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.364 qpair failed and we were unable to recover it. 
00:30:24.365 [2024-11-20 08:27:38.281077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.365 [2024-11-20 08:27:38.281150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.365 qpair failed and we were unable to recover it.
00:30:24.366 [2024-11-20 08:27:38.289032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.366 [2024-11-20 08:27:38.289104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.366 qpair failed and we were unable to recover it.
00:30:24.367 [2024-11-20 08:27:38.296701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:30:24.367 [2024-11-20 08:27:38.300567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.367 [2024-11-20 08:27:38.300602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.367 qpair failed and we were unable to recover it. 00:30:24.367 [2024-11-20 08:27:38.300727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.367 [2024-11-20 08:27:38.300760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.367 qpair failed and we were unable to recover it. 00:30:24.367 [2024-11-20 08:27:38.300869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.367 [2024-11-20 08:27:38.300904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.367 qpair failed and we were unable to recover it. 00:30:24.367 [2024-11-20 08:27:38.301024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.367 [2024-11-20 08:27:38.301058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.367 qpair failed and we were unable to recover it. 00:30:24.367 [2024-11-20 08:27:38.301234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.367 [2024-11-20 08:27:38.301270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.367 qpair failed and we were unable to recover it. 
00:30:24.367 [2024-11-20 08:27:38.301409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.367 [2024-11-20 08:27:38.301444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.367 qpair failed and we were unable to recover it. 00:30:24.367 [2024-11-20 08:27:38.301559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.301598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.301707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.301740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.301946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.301980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.302092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.302127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 
00:30:24.368 [2024-11-20 08:27:38.302323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.302360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.302491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.302525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.302707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.302741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.302848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.302891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.303028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.303064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 
00:30:24.368 [2024-11-20 08:27:38.303245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.303281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.303396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.303438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.303575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.303610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.303728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.303761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.303987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.304021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 
00:30:24.368 [2024-11-20 08:27:38.304220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.304257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.304396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.304432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.304681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.304715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.304962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.304996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.305174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.305238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 
00:30:24.368 [2024-11-20 08:27:38.305437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.305471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.305681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.368 [2024-11-20 08:27:38.305717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.368 qpair failed and we were unable to recover it. 00:30:24.368 [2024-11-20 08:27:38.305988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.650 [2024-11-20 08:27:38.306024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.650 qpair failed and we were unable to recover it. 00:30:24.650 [2024-11-20 08:27:38.306147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.650 [2024-11-20 08:27:38.306181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.650 qpair failed and we were unable to recover it. 00:30:24.650 [2024-11-20 08:27:38.306303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.650 [2024-11-20 08:27:38.306340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.650 qpair failed and we were unable to recover it. 
00:30:24.650 [2024-11-20 08:27:38.306484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.650 [2024-11-20 08:27:38.306520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.650 qpair failed and we were unable to recover it. 00:30:24.650 [2024-11-20 08:27:38.306705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.650 [2024-11-20 08:27:38.306741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.650 qpair failed and we were unable to recover it. 00:30:24.650 [2024-11-20 08:27:38.306935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.650 [2024-11-20 08:27:38.306970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.650 qpair failed and we were unable to recover it. 00:30:24.650 [2024-11-20 08:27:38.307110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.650 [2024-11-20 08:27:38.307146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.650 qpair failed and we were unable to recover it. 00:30:24.650 [2024-11-20 08:27:38.307277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.650 [2024-11-20 08:27:38.307314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.650 qpair failed and we were unable to recover it. 
00:30:24.650 [2024-11-20 08:27:38.307442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.650 [2024-11-20 08:27:38.307479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.650 qpair failed and we were unable to recover it. 00:30:24.650 [2024-11-20 08:27:38.307607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.650 [2024-11-20 08:27:38.307641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.650 qpair failed and we were unable to recover it. 00:30:24.650 [2024-11-20 08:27:38.307789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.650 [2024-11-20 08:27:38.307823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.650 qpair failed and we were unable to recover it. 00:30:24.650 [2024-11-20 08:27:38.307978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.650 [2024-11-20 08:27:38.308012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.650 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.308186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.308232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 
00:30:24.651 [2024-11-20 08:27:38.308364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.308398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.308521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.308555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.308684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.308719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.308832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.308865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.309056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.309090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 
00:30:24.651 [2024-11-20 08:27:38.309276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.309312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.309487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.309527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.309636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.309671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.309796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.309830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.309948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.309983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 
00:30:24.651 [2024-11-20 08:27:38.310154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.310187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.310320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.310355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.310533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.310569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.310687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.310721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.310894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.310927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 
00:30:24.651 [2024-11-20 08:27:38.311109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.311143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.311343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.311377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.311485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.311519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.311653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.311687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.311804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.311837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 
00:30:24.651 [2024-11-20 08:27:38.311951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.311985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.312172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.312216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.312403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.312438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.312551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.312585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.312783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.312819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 
00:30:24.651 [2024-11-20 08:27:38.313002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.313036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.313177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.313224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.313346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.313379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.313564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.313598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.313726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.313760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 
00:30:24.651 [2024-11-20 08:27:38.313870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.313904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.314089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.314123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.314245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.314280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.314462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.314497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 00:30:24.651 [2024-11-20 08:27:38.314671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.314707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 
00:30:24.651 [2024-11-20 08:27:38.314831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.651 [2024-11-20 08:27:38.314865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.651 qpair failed and we were unable to recover it. 
[the same three-line error block (posix_sock_create connect() errno 111 -> nvme_tcp_qpair_connect_sock sock connection error for tqpair=0x7fc864000b90, addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats for every retry from 08:27:38.315000 through 08:27:38.336658; only the timestamps differ]
00:30:24.655 [2024-11-20 08:27:38.336854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.336891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.337071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.337104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.337225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.337260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.337333] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:24.655 [2024-11-20 08:27:38.337367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:24.655 [2024-11-20 08:27:38.337375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:24.655 [2024-11-20 08:27:38.337382] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:24.655 [2024-11-20 08:27:38.337389] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:24.655 [2024-11-20 08:27:38.337389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.337422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.337546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.337578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.337749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.337785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.337966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.338001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.338108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.338142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 
00:30:24.655 [2024-11-20 08:27:38.338287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.338321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.338587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.338622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.338736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.338769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.338893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.338927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.338941] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:24.655 [2024-11-20 08:27:38.339056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.339090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 
00:30:24.655 [2024-11-20 08:27:38.339046] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:24.655 [2024-11-20 08:27:38.339153] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:24.655 [2024-11-20 08:27:38.339217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.339257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.339154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:24.655 [2024-11-20 08:27:38.339370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.339403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.339527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.339562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.339749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.339784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 
00:30:24.655 [2024-11-20 08:27:38.339912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.339947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.340073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.340109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.340305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.340341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.340612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.340646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.340780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.340814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 
00:30:24.655 [2024-11-20 08:27:38.340928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.340963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.341146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.341181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.341371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.341406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.341516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.341550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.655 [2024-11-20 08:27:38.341663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.341704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 
00:30:24.655 [2024-11-20 08:27:38.341881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.655 [2024-11-20 08:27:38.341922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.655 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.342053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.342087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.342197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.342252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.342372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.342407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.342585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.342620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 
00:30:24.656 [2024-11-20 08:27:38.342732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.342767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.342888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.342922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.343052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.343087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.343232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.343269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.343486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.343521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 
00:30:24.656 [2024-11-20 08:27:38.343714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.343749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.343870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.343903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.344016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.344051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.344164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.344221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.344341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.344376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 
00:30:24.656 [2024-11-20 08:27:38.344498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.344531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.344729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.344763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.344895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.344931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.345059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.345093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.345223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.345259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 
00:30:24.656 [2024-11-20 08:27:38.345382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.345416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.345534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.345571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.345762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.345797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.346039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.346073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.346192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.346241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 
00:30:24.656 [2024-11-20 08:27:38.346427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.346465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.346630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.346683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.346800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.346835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.347018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.347053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.347156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.347189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 
00:30:24.656 [2024-11-20 08:27:38.347315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.347349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.347463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.347497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.347709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.347742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.347925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.347960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.348147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.348183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 
00:30:24.656 [2024-11-20 08:27:38.348374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.348409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.348526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.348565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.348695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.348729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.656 [2024-11-20 08:27:38.348924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.656 [2024-11-20 08:27:38.348959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.656 qpair failed and we were unable to recover it. 00:30:24.657 [2024-11-20 08:27:38.349148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.657 [2024-11-20 08:27:38.349192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.657 qpair failed and we were unable to recover it. 
00:30:24.657 [2024-11-20 08:27:38.349399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.657 [2024-11-20 08:27:38.349434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.657 qpair failed and we were unable to recover it. 00:30:24.657 [2024-11-20 08:27:38.349559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.657 [2024-11-20 08:27:38.349593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.657 qpair failed and we were unable to recover it. 00:30:24.657 [2024-11-20 08:27:38.349837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.657 [2024-11-20 08:27:38.349872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.657 qpair failed and we were unable to recover it. 00:30:24.657 [2024-11-20 08:27:38.349998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.657 [2024-11-20 08:27:38.350033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.657 qpair failed and we were unable to recover it. 00:30:24.657 [2024-11-20 08:27:38.350220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.657 [2024-11-20 08:27:38.350268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.657 qpair failed and we were unable to recover it. 
00:30:24.657 [2024-11-20 08:27:38.350407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.657 [2024-11-20 08:27:38.350443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.657 qpair failed and we were unable to recover it.
[... 20 further identical connect() failed (errno = 111) / sock connection error / qpair failed sequences for tqpair=0x7fc870000b90, 08:27:38.350570 through 08:27:38.353944, elided ...]
00:30:24.657 [2024-11-20 08:27:38.354130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.657 [2024-11-20 08:27:38.354198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420
00:30:24.657 qpair failed and we were unable to recover it.
[... 18 further identical sequences for tqpair=0x7fc868000b90, 08:27:38.354474 through 08:27:38.357390, elided ...]
00:30:24.658 [2024-11-20 08:27:38.357527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.658 [2024-11-20 08:27:38.357577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.658 qpair failed and we were unable to recover it.
[... 34 further identical sequences for tqpair=0x191fba0, 08:27:38.357703 through 08:27:38.364035, elided ...]
00:30:24.659 [2024-11-20 08:27:38.364238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.659 [2024-11-20 08:27:38.364281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420
00:30:24.659 qpair failed and we were unable to recover it.
[... 39 further identical sequences for tqpair=0x7fc870000b90, 08:27:38.364421 through 08:27:38.371910, elided ...]
00:30:24.660 [2024-11-20 08:27:38.372044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.660 [2024-11-20 08:27:38.372093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.660 qpair failed and we were unable to recover it.
00:30:24.661 [the connect() failed, errno = 111 (ECONNREFUSED) / qpair-failure pair above repeated 39 more times for tqpair=0x7fc868000b90 between 08:27:38.372233 and 08:27:38.379496; identical entries elided]
00:30:24.661 [2024-11-20 08:27:38.379636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.661 [2024-11-20 08:27:38.379690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.661 qpair failed and we were unable to recover it.
00:30:24.662 [the connect() failed, errno = 111 (ECONNREFUSED) / qpair-failure pair above repeated 54 more times for tqpair=0x7fc864000b90 between 08:27:38.379822 and 08:27:38.391022; identical entries elided]
00:30:24.662 [2024-11-20 08:27:38.391284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.662 [2024-11-20 08:27:38.391319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.662 qpair failed and we were unable to recover it. 00:30:24.662 [2024-11-20 08:27:38.391511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.662 [2024-11-20 08:27:38.391547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.662 qpair failed and we were unable to recover it. 00:30:24.662 [2024-11-20 08:27:38.391746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.662 [2024-11-20 08:27:38.391781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.662 qpair failed and we were unable to recover it. 00:30:24.662 [2024-11-20 08:27:38.392027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.662 [2024-11-20 08:27:38.392060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.662 qpair failed and we were unable to recover it. 00:30:24.662 [2024-11-20 08:27:38.392306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.662 [2024-11-20 08:27:38.392342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.662 qpair failed and we were unable to recover it. 
00:30:24.662 [2024-11-20 08:27:38.392481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.662 [2024-11-20 08:27:38.392515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.662 qpair failed and we were unable to recover it. 00:30:24.662 [2024-11-20 08:27:38.392638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.662 [2024-11-20 08:27:38.392672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.662 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.392904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.392938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.393126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.393160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.393362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.393399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 
00:30:24.663 [2024-11-20 08:27:38.393593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.393628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.393758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.393792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.394001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.394036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.394216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.394254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.394446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.394480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 
00:30:24.663 [2024-11-20 08:27:38.394605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.394639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.394775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.394809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.394942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.394976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.395231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.395267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.395403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.395436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 
00:30:24.663 [2024-11-20 08:27:38.395557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.395590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.395747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.395780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.396040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.396073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.396318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.396353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.396486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.396518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 
00:30:24.663 [2024-11-20 08:27:38.396763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.396802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.396931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.396964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.397215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.397251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.397518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.397551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.397736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.397770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 
00:30:24.663 [2024-11-20 08:27:38.398019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.398053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.398313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.398349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.398615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.398649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.398889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.398922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.399105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.399139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 
00:30:24.663 [2024-11-20 08:27:38.399359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.399394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.399529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.399564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.399750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.399783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.399984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.400018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.400156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.400190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 
00:30:24.663 [2024-11-20 08:27:38.400339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.400373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.400586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.400621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.400805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.400839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.400954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.400987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.663 qpair failed and we were unable to recover it. 00:30:24.663 [2024-11-20 08:27:38.401251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.663 [2024-11-20 08:27:38.401287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 
00:30:24.664 [2024-11-20 08:27:38.401456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.401489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.401626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.401660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.401774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.401807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.401943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.401976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.402092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.402125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 
00:30:24.664 [2024-11-20 08:27:38.402317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.402353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.402597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.402631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.402802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.402836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.403097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.403131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.403267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.403303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 
00:30:24.664 [2024-11-20 08:27:38.403490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.403523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.403647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.403681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.403864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.403899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.404032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.404064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.404184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.404229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 
00:30:24.664 [2024-11-20 08:27:38.404365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.404398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.404523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.404556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.404681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.404714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.404855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.404889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.405061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.405094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 
00:30:24.664 [2024-11-20 08:27:38.405299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.405340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.405450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.405484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.405622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.405655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.405859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.405893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.406105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.406139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 
00:30:24.664 [2024-11-20 08:27:38.406369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.406405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.406542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.406576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.406692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.406726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.406996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.407031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.407244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.407279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 
00:30:24.664 [2024-11-20 08:27:38.407454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.407487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.407629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.407663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.407965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.407999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.664 qpair failed and we were unable to recover it. 00:30:24.664 [2024-11-20 08:27:38.408271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.664 [2024-11-20 08:27:38.408306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.665 qpair failed and we were unable to recover it. 00:30:24.665 [2024-11-20 08:27:38.408491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.665 [2024-11-20 08:27:38.408526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.665 qpair failed and we were unable to recover it. 
00:30:24.667 [2024-11-20 08:27:38.434611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.667 [2024-11-20 08:27:38.434644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.667 qpair failed and we were unable to recover it. 00:30:24.667 [2024-11-20 08:27:38.434838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.667 [2024-11-20 08:27:38.434872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.667 qpair failed and we were unable to recover it. 00:30:24.667 [2024-11-20 08:27:38.435006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.435040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.435327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.435363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.435560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.435594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 
00:30:24.668 [2024-11-20 08:27:38.435735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.435768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.436069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.436102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.436333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.436369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.436489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.436522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.436733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.436767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 
00:30:24.668 [2024-11-20 08:27:38.437081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.437114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.437291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.437326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.437469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.437502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.437629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.437664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.437900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.437935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 
00:30:24.668 [2024-11-20 08:27:38.438147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.438181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.438327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.438361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.438582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.438616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.438902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.438936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.439155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.439189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 
00:30:24.668 [2024-11-20 08:27:38.439434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.439468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.439666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.439699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.439981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.440015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.440286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.440322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.440542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.440575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 
00:30:24.668 [2024-11-20 08:27:38.440716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.440751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.440867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.440900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.441207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.441243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.668 [2024-11-20 08:27:38.441382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.441416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 
00:30:24.668 [2024-11-20 08:27:38.441610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.441644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.441771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.441804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.441982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.442016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.442277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.442314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 
00:30:24.668 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:24.668 [2024-11-20 08:27:38.442467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.442501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.668 [2024-11-20 08:27:38.442671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.442706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.668 qpair failed and we were unable to recover it. 00:30:24.668 [2024-11-20 08:27:38.442897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.668 [2024-11-20 08:27:38.442931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.443107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.443140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 
00:30:24.669 [2024-11-20 08:27:38.443350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.443389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.443631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.443663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.443870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.443903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.444102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.444134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.444373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.444407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 
00:30:24.669 [2024-11-20 08:27:38.444647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.444679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.444888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.444920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.445127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.445159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.445417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.445451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.445642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.445674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 
00:30:24.669 [2024-11-20 08:27:38.445871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.445904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.446164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.446198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.446340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.446372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.446639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.446673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.446916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.446949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 
00:30:24.669 [2024-11-20 08:27:38.447152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.447183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.447403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.447437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.447619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.447651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.447864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.447897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.448038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.448070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 
00:30:24.669 [2024-11-20 08:27:38.448243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.448278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.448490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.448522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.448736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.448769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.449045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.449078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.449324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.449357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 
00:30:24.669 [2024-11-20 08:27:38.449558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.449590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.449797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.449829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.450042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.450075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.450258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.450292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.450480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.450512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 
00:30:24.669 [2024-11-20 08:27:38.450806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.450864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.451097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.451143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.451395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.451433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.451555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.451587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 00:30:24.669 [2024-11-20 08:27:38.451743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.669 [2024-11-20 08:27:38.451776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.669 qpair failed and we were unable to recover it. 
00:30:24.670 [2024-11-20 08:27:38.452012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.670 [2024-11-20 08:27:38.452045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420
00:30:24.670 qpair failed and we were unable to recover it.
00:30:24.673 [repeats elided: the identical connect() failed (errno = 111) / sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." triplet recurs continuously from 08:27:38.452235 through 08:27:38.478194]
00:30:24.673 [2024-11-20 08:27:38.478347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.478380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.478511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.478543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.478749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.478780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:24.673 [2024-11-20 08:27:38.478977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.479012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.479291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.479327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 
00:30:24.673 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:24.673 [2024-11-20 08:27:38.479478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.479517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.479646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.479677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.673 [2024-11-20 08:27:38.479977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.480011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.480147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.480179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 
00:30:24.673 [2024-11-20 08:27:38.480386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.480419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.480601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.480633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.480848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.480881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.481050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.481082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.481305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.481338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 
00:30:24.673 [2024-11-20 08:27:38.481465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.481496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.481621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.481652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.481853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.481885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.482081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.482112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.482328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.482363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 
00:30:24.673 [2024-11-20 08:27:38.482494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.482526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.482633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.482665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.482867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.482899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.483115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.483147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.483292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.483325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 
00:30:24.673 [2024-11-20 08:27:38.483462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.483495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.483626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.483658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.483973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.484005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.484193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.484237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 00:30:24.673 [2024-11-20 08:27:38.484356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.673 [2024-11-20 08:27:38.484388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.673 qpair failed and we were unable to recover it. 
00:30:24.673 [2024-11-20 08:27:38.484532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.484563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.484746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.484778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.485068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.485127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.485381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.485419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.485612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.485646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 
00:30:24.674 [2024-11-20 08:27:38.485779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.485812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.485991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.486023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.486277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.486313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.486510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.486543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.486727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.486760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 
00:30:24.674 [2024-11-20 08:27:38.486953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.486986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.487187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.487237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.487451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.487485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.487680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.487714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.488004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.488038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 
00:30:24.674 [2024-11-20 08:27:38.488175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.488223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.488435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.488470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.488614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.488649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.488844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.488878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.489150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.489186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 
00:30:24.674 [2024-11-20 08:27:38.489425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.489461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.489678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.489712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.489914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.489949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.490191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.490242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.490452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.490487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 
00:30:24.674 [2024-11-20 08:27:38.490700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.490735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.491002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.491036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.491181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.491231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.491444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.491478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.491674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.491713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 
00:30:24.674 [2024-11-20 08:27:38.491939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.491972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.492193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.492243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.492384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.492418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.492611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.492644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.492942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.492976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 
00:30:24.674 [2024-11-20 08:27:38.493117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.493151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.493403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.493439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.493566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.493599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.674 qpair failed and we were unable to recover it. 00:30:24.674 [2024-11-20 08:27:38.493811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.674 [2024-11-20 08:27:38.493844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.675 qpair failed and we were unable to recover it. 00:30:24.675 [2024-11-20 08:27:38.494045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.675 [2024-11-20 08:27:38.494078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.675 qpair failed and we were unable to recover it. 
00:30:24.675 [2024-11-20 08:27:38.494255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.675 [2024-11-20 08:27:38.494291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.675 qpair failed and we were unable to recover it. 00:30:24.675 [2024-11-20 08:27:38.494507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.675 [2024-11-20 08:27:38.494540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.675 qpair failed and we were unable to recover it. 00:30:24.675 [2024-11-20 08:27:38.494655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.675 [2024-11-20 08:27:38.494689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.675 qpair failed and we were unable to recover it. 00:30:24.675 [2024-11-20 08:27:38.494828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.675 [2024-11-20 08:27:38.494860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.675 qpair failed and we were unable to recover it. 00:30:24.675 [2024-11-20 08:27:38.495059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.675 [2024-11-20 08:27:38.495093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.675 qpair failed and we were unable to recover it. 
00:30:24.675 [2024-11-20 08:27:38.495344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.675 [2024-11-20 08:27:38.495380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.675 qpair failed and we were unable to recover it. 00:30:24.675 [2024-11-20 08:27:38.495620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.675 [2024-11-20 08:27:38.495654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.675 qpair failed and we were unable to recover it. 00:30:24.675 [2024-11-20 08:27:38.495784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.675 [2024-11-20 08:27:38.495818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.675 qpair failed and we were unable to recover it. 00:30:24.675 [2024-11-20 08:27:38.496060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.675 [2024-11-20 08:27:38.496093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.675 qpair failed and we were unable to recover it. 00:30:24.675 [2024-11-20 08:27:38.496354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.675 [2024-11-20 08:27:38.496389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.675 qpair failed and we were unable to recover it. 
00:30:24.678 [2024-11-20 08:27:38.521383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.521418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.521589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.521623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.521837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.521870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.522158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.522192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.522409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.522453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 
00:30:24.678 [2024-11-20 08:27:38.522652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.522685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.522909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.522943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 Malloc0 00:30:24.678 [2024-11-20 08:27:38.523177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.523223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.523462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.523495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.523691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.523723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 
00:30:24.678 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.678 [2024-11-20 08:27:38.524001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.524035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:24.678 [2024-11-20 08:27:38.524281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.524317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.524575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.678 [2024-11-20 08:27:38.524609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.524747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.524780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 
00:30:24.678 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.678 [2024-11-20 08:27:38.524908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.524943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.525227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.525261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.525450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.525483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.525670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.525703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.525991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.526024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 
00:30:24.678 [2024-11-20 08:27:38.526226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.526261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.526375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.526409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.526531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.526563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.678 qpair failed and we were unable to recover it. 00:30:24.678 [2024-11-20 08:27:38.526798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.678 [2024-11-20 08:27:38.526831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.527022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.527055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 
00:30:24.679 [2024-11-20 08:27:38.527259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.527293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.527482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.527515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.527735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.527768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.527956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.527990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.528213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.528248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc864000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 
00:30:24.679 [2024-11-20 08:27:38.528391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.528441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.528591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.528625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.528748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.528781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.528970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.529004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.529135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.529168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc870000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 
00:30:24.679 [2024-11-20 08:27:38.529384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.529423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.529697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.529731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.529868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.529902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.530167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.530209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.530389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.530422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 
00:30:24.679 [2024-11-20 08:27:38.530617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.530621] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.679 [2024-11-20 08:27:38.530651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.530881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.530914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.531110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.531143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fc868000b90 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.531337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.531377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.531579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.531613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 
00:30:24.679 [2024-11-20 08:27:38.531746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.531781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.532039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.532074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.532308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.532346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.532531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.532566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.532857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.532890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 
00:30:24.679 [2024-11-20 08:27:38.533027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.533061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.533211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.533246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.533470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.533505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.533740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.533774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.533955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.533989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 
00:30:24.679 [2024-11-20 08:27:38.534162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.534196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.534360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.534394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.534529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.534563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.534691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.534725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.679 [2024-11-20 08:27:38.534994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.535028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 
00:30:24.679 [2024-11-20 08:27:38.535277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.679 [2024-11-20 08:27:38.535312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.679 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.535500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.535533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.535775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.535809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.536064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.536098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.536237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.536273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 
00:30:24.680 [2024-11-20 08:27:38.536487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.536521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.536710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.536744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.536957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.536992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.537262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.537297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.537493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.537526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 
00:30:24.680 [2024-11-20 08:27:38.537670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.537709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.537991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.538026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.538230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.538265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.538452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.538486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.538675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.538710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 
00:30:24.680 [2024-11-20 08:27:38.538923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.538957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.539167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.539200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.680 [2024-11-20 08:27:38.539406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.539441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.539617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.539651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 
00:30:24.680 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:24.680 [2024-11-20 08:27:38.539788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.539822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.680 [2024-11-20 08:27:38.540126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.540160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:24.680 [2024-11-20 08:27:38.540367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.540402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 00:30:24.680 [2024-11-20 08:27:38.540624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.680 [2024-11-20 08:27:38.540658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.680 qpair failed and we were unable to recover it. 
00:30:24.680 [2024-11-20 08:27:38.541007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.541041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.680 qpair failed and we were unable to recover it.
00:30:24.680 [2024-11-20 08:27:38.541249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.541285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.680 qpair failed and we were unable to recover it.
00:30:24.680 [2024-11-20 08:27:38.541529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.541562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.680 qpair failed and we were unable to recover it.
00:30:24.680 [2024-11-20 08:27:38.541749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.541783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.680 qpair failed and we were unable to recover it.
00:30:24.680 [2024-11-20 08:27:38.542021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.542055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.680 qpair failed and we were unable to recover it.
00:30:24.680 [2024-11-20 08:27:38.542240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.542275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.680 qpair failed and we were unable to recover it.
00:30:24.680 [2024-11-20 08:27:38.542518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.542553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.680 qpair failed and we were unable to recover it.
00:30:24.680 [2024-11-20 08:27:38.542672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.542705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.680 qpair failed and we were unable to recover it.
00:30:24.680 [2024-11-20 08:27:38.542924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.542958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.680 qpair failed and we were unable to recover it.
00:30:24.680 [2024-11-20 08:27:38.543148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.543182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.680 qpair failed and we were unable to recover it.
00:30:24.680 [2024-11-20 08:27:38.543324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.543358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.680 qpair failed and we were unable to recover it.
00:30:24.680 [2024-11-20 08:27:38.543562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.543596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.680 qpair failed and we were unable to recover it.
00:30:24.680 [2024-11-20 08:27:38.543870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.680 [2024-11-20 08:27:38.543909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.544145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.544179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.544305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.544340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.544819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.544859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.545148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.545184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.545402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.545437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.545583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.545617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.545835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.545870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.545988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.546021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.546217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.546252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.546394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.546428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.546697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.546730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.546914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.546947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.547072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.547105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.547228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.547264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:24.681 [2024-11-20 08:27:38.547384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.547418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.547608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.547642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:30:24.681 [2024-11-20 08:27:38.547785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.547819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:24.681 [2024-11-20 08:27:38.548105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.548138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:24.681 [2024-11-20 08:27:38.548383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.548418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.548559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.548593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.548848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.548881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.549098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.549131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.549270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.549304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.549510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.549544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.549792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.549826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.550095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.550129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.550260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.550294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.550434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.550468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.550616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.550649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.550831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.550863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.551048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.551082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.551332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.551367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.551546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.551580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.551850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.551883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.552147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.681 [2024-11-20 08:27:38.552180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.681 qpair failed and we were unable to recover it.
00:30:24.681 [2024-11-20 08:27:38.552385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.682 [2024-11-20 08:27:38.552419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.682 qpair failed and we were unable to recover it. 00:30:24.682 [2024-11-20 08:27:38.552556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.682 [2024-11-20 08:27:38.552589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.682 qpair failed and we were unable to recover it. 00:30:24.682 [2024-11-20 08:27:38.552836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.682 [2024-11-20 08:27:38.552869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.682 qpair failed and we were unable to recover it. 00:30:24.682 [2024-11-20 08:27:38.553043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.682 [2024-11-20 08:27:38.553083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.682 qpair failed and we were unable to recover it. 00:30:24.682 [2024-11-20 08:27:38.553282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:30:24.682 [2024-11-20 08:27:38.553316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420 00:30:24.682 qpair failed and we were unable to recover it. 
00:30:24.682 [2024-11-20 08:27:38.553526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.553560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.553734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.553766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.553968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.554001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.554231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.554266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.554463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.554498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.554642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.554676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.554816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.554848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.555091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.555124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.555316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.555353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:24.682 [2024-11-20 08:27:38.555473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.555506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.555641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.555675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:30:24.682 [2024-11-20 08:27:38.555884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.555918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:24.682 [2024-11-20 08:27:38.556120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.556154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:24.682 [2024-11-20 08:27:38.556454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.556488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.556640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.556674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.556797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.556831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.557032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.557065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.557241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.557275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.557399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.557433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.557577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.557610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.557782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.557817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.558011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.558044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.558260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.558295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.558444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.558485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.558626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:30:24.682 [2024-11-20 08:27:38.558659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x191fba0 with addr=10.0.0.2, port=4420
00:30:24.682 qpair failed and we were unable to recover it.
00:30:24.682 [2024-11-20 08:27:38.558864] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:30:24.682 [2024-11-20 08:27:38.561314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.682 [2024-11-20 08:27:38.561424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.683 [2024-11-20 08:27:38.561471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.683 [2024-11-20 08:27:38.561496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.683 [2024-11-20 08:27:38.561515] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.683 [2024-11-20 08:27:38.561566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.683 qpair failed and we were unable to recover it.
00:30:24.683 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:24.683 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:30:24.683 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:30:24.683 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:30:24.683 [2024-11-20 08:27:38.571182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.683 [2024-11-20 08:27:38.571267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.683 [2024-11-20 08:27:38.571295] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.683 [2024-11-20 08:27:38.571310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.683 [2024-11-20 08:27:38.571325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.683 [2024-11-20 08:27:38.571357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.683 qpair failed and we were unable to recover it.
00:30:24.683 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:30:24.683 08:27:38 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1860726
00:30:24.683 [2024-11-20 08:27:38.581218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.683 [2024-11-20 08:27:38.581303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.683 [2024-11-20 08:27:38.581322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.683 [2024-11-20 08:27:38.581332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.683 [2024-11-20 08:27:38.581341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.683 [2024-11-20 08:27:38.581366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.683 qpair failed and we were unable to recover it.
00:30:24.683 [2024-11-20 08:27:38.591277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.683 [2024-11-20 08:27:38.591380] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.683 [2024-11-20 08:27:38.591395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.683 [2024-11-20 08:27:38.591402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.683 [2024-11-20 08:27:38.591408] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.683 [2024-11-20 08:27:38.591423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.683 qpair failed and we were unable to recover it.
00:30:24.683 [2024-11-20 08:27:38.601218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.683 [2024-11-20 08:27:38.601292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.683 [2024-11-20 08:27:38.601307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.683 [2024-11-20 08:27:38.601314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.683 [2024-11-20 08:27:38.601320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.683 [2024-11-20 08:27:38.601335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.683 qpair failed and we were unable to recover it.
00:30:24.683 [2024-11-20 08:27:38.611226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.683 [2024-11-20 08:27:38.611280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.683 [2024-11-20 08:27:38.611294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.683 [2024-11-20 08:27:38.611302] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.683 [2024-11-20 08:27:38.611308] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.683 [2024-11-20 08:27:38.611322] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.683 qpair failed and we were unable to recover it.
00:30:24.683 [2024-11-20 08:27:38.621275] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.683 [2024-11-20 08:27:38.621332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.683 [2024-11-20 08:27:38.621346] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.683 [2024-11-20 08:27:38.621354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.683 [2024-11-20 08:27:38.621361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.683 [2024-11-20 08:27:38.621376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.683 qpair failed and we were unable to recover it.
00:30:24.683 [2024-11-20 08:27:38.631277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.683 [2024-11-20 08:27:38.631341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.683 [2024-11-20 08:27:38.631355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.683 [2024-11-20 08:27:38.631363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.683 [2024-11-20 08:27:38.631369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.683 [2024-11-20 08:27:38.631384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.683 qpair failed and we were unable to recover it.
00:30:24.683 [2024-11-20 08:27:38.641323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.683 [2024-11-20 08:27:38.641422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.683 [2024-11-20 08:27:38.641437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.683 [2024-11-20 08:27:38.641444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.683 [2024-11-20 08:27:38.641450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.683 [2024-11-20 08:27:38.641465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.683 qpair failed and we were unable to recover it.
00:30:24.945 [2024-11-20 08:27:38.651365] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.945 [2024-11-20 08:27:38.651427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.945 [2024-11-20 08:27:38.651441] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.945 [2024-11-20 08:27:38.651448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.945 [2024-11-20 08:27:38.651455] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.945 [2024-11-20 08:27:38.651469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.945 qpair failed and we were unable to recover it.
00:30:24.945 [2024-11-20 08:27:38.661354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.945 [2024-11-20 08:27:38.661406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.945 [2024-11-20 08:27:38.661420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.945 [2024-11-20 08:27:38.661427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.945 [2024-11-20 08:27:38.661433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.945 [2024-11-20 08:27:38.661449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.945 qpair failed and we were unable to recover it.
00:30:24.945 [2024-11-20 08:27:38.671358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.945 [2024-11-20 08:27:38.671427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.945 [2024-11-20 08:27:38.671442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.945 [2024-11-20 08:27:38.671453] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.945 [2024-11-20 08:27:38.671459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.945 [2024-11-20 08:27:38.671474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.945 qpair failed and we were unable to recover it.
00:30:24.945 [2024-11-20 08:27:38.681417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.945 [2024-11-20 08:27:38.681475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.945 [2024-11-20 08:27:38.681489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.945 [2024-11-20 08:27:38.681496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.945 [2024-11-20 08:27:38.681503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.945 [2024-11-20 08:27:38.681518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.945 qpair failed and we were unable to recover it.
00:30:24.945 [2024-11-20 08:27:38.691418] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.945 [2024-11-20 08:27:38.691474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.945 [2024-11-20 08:27:38.691488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.945 [2024-11-20 08:27:38.691494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.945 [2024-11-20 08:27:38.691501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.945 [2024-11-20 08:27:38.691516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.945 qpair failed and we were unable to recover it.
00:30:24.945 [2024-11-20 08:27:38.701467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.945 [2024-11-20 08:27:38.701523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.945 [2024-11-20 08:27:38.701538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.945 [2024-11-20 08:27:38.701545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.945 [2024-11-20 08:27:38.701551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.945 [2024-11-20 08:27:38.701566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.945 qpair failed and we were unable to recover it.
00:30:24.945 [2024-11-20 08:27:38.711517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.945 [2024-11-20 08:27:38.711584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.945 [2024-11-20 08:27:38.711599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.945 [2024-11-20 08:27:38.711606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.945 [2024-11-20 08:27:38.711612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.945 [2024-11-20 08:27:38.711630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.945 qpair failed and we were unable to recover it.
00:30:24.945 [2024-11-20 08:27:38.721549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.945 [2024-11-20 08:27:38.721611] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.945 [2024-11-20 08:27:38.721625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.945 [2024-11-20 08:27:38.721632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.945 [2024-11-20 08:27:38.721638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.945 [2024-11-20 08:27:38.721653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.945 qpair failed and we were unable to recover it.
00:30:24.946 [2024-11-20 08:27:38.731583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.946 [2024-11-20 08:27:38.731639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.946 [2024-11-20 08:27:38.731652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.946 [2024-11-20 08:27:38.731659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.946 [2024-11-20 08:27:38.731666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.946 [2024-11-20 08:27:38.731680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.946 qpair failed and we were unable to recover it.
00:30:24.946 [2024-11-20 08:27:38.741571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.946 [2024-11-20 08:27:38.741626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.946 [2024-11-20 08:27:38.741641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.946 [2024-11-20 08:27:38.741648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.946 [2024-11-20 08:27:38.741655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.946 [2024-11-20 08:27:38.741669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.946 qpair failed and we were unable to recover it.
00:30:24.946 [2024-11-20 08:27:38.751608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.946 [2024-11-20 08:27:38.751668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.946 [2024-11-20 08:27:38.751682] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.946 [2024-11-20 08:27:38.751689] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.946 [2024-11-20 08:27:38.751695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.946 [2024-11-20 08:27:38.751710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.946 qpair failed and we were unable to recover it.
00:30:24.946 [2024-11-20 08:27:38.761633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.946 [2024-11-20 08:27:38.761695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.946 [2024-11-20 08:27:38.761709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.946 [2024-11-20 08:27:38.761717] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.946 [2024-11-20 08:27:38.761723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.946 [2024-11-20 08:27:38.761737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.946 qpair failed and we were unable to recover it.
00:30:24.946 [2024-11-20 08:27:38.771585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.946 [2024-11-20 08:27:38.771642] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.946 [2024-11-20 08:27:38.771656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.946 [2024-11-20 08:27:38.771663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.946 [2024-11-20 08:27:38.771669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.946 [2024-11-20 08:27:38.771683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.946 qpair failed and we were unable to recover it.
00:30:24.946 [2024-11-20 08:27:38.781688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.946 [2024-11-20 08:27:38.781746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.946 [2024-11-20 08:27:38.781760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.946 [2024-11-20 08:27:38.781767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.946 [2024-11-20 08:27:38.781774] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.946 [2024-11-20 08:27:38.781789] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.946 qpair failed and we were unable to recover it.
00:30:24.946 [2024-11-20 08:27:38.791725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.946 [2024-11-20 08:27:38.791782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.946 [2024-11-20 08:27:38.791799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.946 [2024-11-20 08:27:38.791806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.946 [2024-11-20 08:27:38.791813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.946 [2024-11-20 08:27:38.791829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.946 qpair failed and we were unable to recover it.
00:30:24.946 [2024-11-20 08:27:38.801756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.946 [2024-11-20 08:27:38.801830] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.946 [2024-11-20 08:27:38.801845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.946 [2024-11-20 08:27:38.801859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.946 [2024-11-20 08:27:38.801865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.946 [2024-11-20 08:27:38.801880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.946 qpair failed and we were unable to recover it.
00:30:24.946 [2024-11-20 08:27:38.811767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.946 [2024-11-20 08:27:38.811823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.946 [2024-11-20 08:27:38.811838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.946 [2024-11-20 08:27:38.811847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.946 [2024-11-20 08:27:38.811853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.946 [2024-11-20 08:27:38.811868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.946 qpair failed and we were unable to recover it.
00:30:24.946 [2024-11-20 08:27:38.821731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.946 [2024-11-20 08:27:38.821784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.946 [2024-11-20 08:27:38.821798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.946 [2024-11-20 08:27:38.821805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.946 [2024-11-20 08:27:38.821812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.946 [2024-11-20 08:27:38.821827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.947 qpair failed and we were unable to recover it.
00:30:24.947 [2024-11-20 08:27:38.831809] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.947 [2024-11-20 08:27:38.831870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.947 [2024-11-20 08:27:38.831884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.947 [2024-11-20 08:27:38.831891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.947 [2024-11-20 08:27:38.831897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.947 [2024-11-20 08:27:38.831911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.947 qpair failed and we were unable to recover it.
00:30:24.947 [2024-11-20 08:27:38.841863] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.947 [2024-11-20 08:27:38.841919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.947 [2024-11-20 08:27:38.841932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.947 [2024-11-20 08:27:38.841939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.947 [2024-11-20 08:27:38.841945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.947 [2024-11-20 08:27:38.841964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.947 qpair failed and we were unable to recover it.
00:30:24.947 [2024-11-20 08:27:38.851857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.947 [2024-11-20 08:27:38.851919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.947 [2024-11-20 08:27:38.851933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.947 [2024-11-20 08:27:38.851941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.947 [2024-11-20 08:27:38.851946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.947 [2024-11-20 08:27:38.851960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.947 qpair failed and we were unable to recover it.
00:30:24.947 [2024-11-20 08:27:38.861900] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.947 [2024-11-20 08:27:38.861958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.947 [2024-11-20 08:27:38.861971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.947 [2024-11-20 08:27:38.861979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.947 [2024-11-20 08:27:38.861985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.947 [2024-11-20 08:27:38.861999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.947 qpair failed and we were unable to recover it.
00:30:24.947 [2024-11-20 08:27:38.871947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.947 [2024-11-20 08:27:38.872001] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.947 [2024-11-20 08:27:38.872015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.947 [2024-11-20 08:27:38.872021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.947 [2024-11-20 08:27:38.872027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.947 [2024-11-20 08:27:38.872042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.947 qpair failed and we were unable to recover it.
00:30:24.947 [2024-11-20 08:27:38.881932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.947 [2024-11-20 08:27:38.881991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.947 [2024-11-20 08:27:38.882005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.947 [2024-11-20 08:27:38.882013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.947 [2024-11-20 08:27:38.882020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.947 [2024-11-20 08:27:38.882035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.947 qpair failed and we were unable to recover it.
00:30:24.947 [2024-11-20 08:27:38.892003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.947 [2024-11-20 08:27:38.892064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.947 [2024-11-20 08:27:38.892078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.947 [2024-11-20 08:27:38.892085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.947 [2024-11-20 08:27:38.892092] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.947 [2024-11-20 08:27:38.892106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.947 qpair failed and we were unable to recover it.
00:30:24.947 [2024-11-20 08:27:38.902018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.947 [2024-11-20 08:27:38.902074] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.947 [2024-11-20 08:27:38.902088] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.947 [2024-11-20 08:27:38.902094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.947 [2024-11-20 08:27:38.902101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.947 [2024-11-20 08:27:38.902115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.947 qpair failed and we were unable to recover it.
00:30:24.947 [2024-11-20 08:27:38.912123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:24.947 [2024-11-20 08:27:38.912223] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:24.947 [2024-11-20 08:27:38.912239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:24.947 [2024-11-20 08:27:38.912246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:24.947 [2024-11-20 08:27:38.912252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:24.947 [2024-11-20 08:27:38.912267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:24.947 qpair failed and we were unable to recover it.
00:30:24.947 [2024-11-20 08:27:38.922090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.947 [2024-11-20 08:27:38.922151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.947 [2024-11-20 08:27:38.922165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.947 [2024-11-20 08:27:38.922173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.947 [2024-11-20 08:27:38.922179] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:24.947 [2024-11-20 08:27:38.922194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.947 qpair failed and we were unable to recover it. 
00:30:24.947 [2024-11-20 08:27:38.932141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.947 [2024-11-20 08:27:38.932205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.947 [2024-11-20 08:27:38.932220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.947 [2024-11-20 08:27:38.932230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.947 [2024-11-20 08:27:38.932237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:24.948 [2024-11-20 08:27:38.932252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.948 qpair failed and we were unable to recover it. 
00:30:24.948 [2024-11-20 08:27:38.942168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.948 [2024-11-20 08:27:38.942221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.948 [2024-11-20 08:27:38.942235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.948 [2024-11-20 08:27:38.942242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.948 [2024-11-20 08:27:38.942248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:24.948 [2024-11-20 08:27:38.942263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.948 qpair failed and we were unable to recover it. 
00:30:24.948 [2024-11-20 08:27:38.952179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.948 [2024-11-20 08:27:38.952245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.948 [2024-11-20 08:27:38.952259] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.948 [2024-11-20 08:27:38.952266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.948 [2024-11-20 08:27:38.952272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:24.948 [2024-11-20 08:27:38.952287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.948 qpair failed and we were unable to recover it. 
00:30:24.948 [2024-11-20 08:27:38.962220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:24.948 [2024-11-20 08:27:38.962276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:24.948 [2024-11-20 08:27:38.962289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:24.948 [2024-11-20 08:27:38.962295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:24.948 [2024-11-20 08:27:38.962302] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:24.948 [2024-11-20 08:27:38.962317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:24.948 qpair failed and we were unable to recover it. 
00:30:25.209 [2024-11-20 08:27:38.972251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.209 [2024-11-20 08:27:38.972309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.209 [2024-11-20 08:27:38.972324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.209 [2024-11-20 08:27:38.972331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.209 [2024-11-20 08:27:38.972338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.209 [2024-11-20 08:27:38.972356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.209 qpair failed and we were unable to recover it. 
00:30:25.209 [2024-11-20 08:27:38.982327] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.209 [2024-11-20 08:27:38.982430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.209 [2024-11-20 08:27:38.982446] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.209 [2024-11-20 08:27:38.982452] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.209 [2024-11-20 08:27:38.982459] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.209 [2024-11-20 08:27:38.982474] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.209 qpair failed and we were unable to recover it. 
00:30:25.209 [2024-11-20 08:27:38.992303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.209 [2024-11-20 08:27:38.992361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.209 [2024-11-20 08:27:38.992375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.209 [2024-11-20 08:27:38.992381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.209 [2024-11-20 08:27:38.992387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.209 [2024-11-20 08:27:38.992402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.209 qpair failed and we were unable to recover it. 
00:30:25.209 [2024-11-20 08:27:39.002321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.209 [2024-11-20 08:27:39.002381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.209 [2024-11-20 08:27:39.002395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.209 [2024-11-20 08:27:39.002402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.209 [2024-11-20 08:27:39.002409] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.209 [2024-11-20 08:27:39.002423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.209 qpair failed and we were unable to recover it. 
00:30:25.209 [2024-11-20 08:27:39.012364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.209 [2024-11-20 08:27:39.012422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.209 [2024-11-20 08:27:39.012436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.209 [2024-11-20 08:27:39.012444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.209 [2024-11-20 08:27:39.012450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.209 [2024-11-20 08:27:39.012465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.209 qpair failed and we were unable to recover it. 
00:30:25.209 [2024-11-20 08:27:39.022414] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.209 [2024-11-20 08:27:39.022516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.209 [2024-11-20 08:27:39.022531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.209 [2024-11-20 08:27:39.022537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.022544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.022558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.032426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.032497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.032511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.210 [2024-11-20 08:27:39.032518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.032524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.032538] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.042445] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.042505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.042519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.210 [2024-11-20 08:27:39.042526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.042533] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.042548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.052472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.052530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.052544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.210 [2024-11-20 08:27:39.052551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.052558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.052573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.062527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.062586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.062600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.210 [2024-11-20 08:27:39.062610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.062616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.062630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.072530] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.072593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.072607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.210 [2024-11-20 08:27:39.072614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.072620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.072635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.082555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.082610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.082624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.210 [2024-11-20 08:27:39.082631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.082637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.082652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.092726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.092777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.092791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.210 [2024-11-20 08:27:39.092798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.092805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.092820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.102601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.102698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.102712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.210 [2024-11-20 08:27:39.102719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.102725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.102743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.112698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.112806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.112821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.210 [2024-11-20 08:27:39.112829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.112835] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.112850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.122664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.122718] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.122732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.210 [2024-11-20 08:27:39.122739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.122746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.122760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.132737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.132795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.132809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.210 [2024-11-20 08:27:39.132817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.132823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.132838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.142721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.142790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.142804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.210 [2024-11-20 08:27:39.142812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.210 [2024-11-20 08:27:39.142818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.210 [2024-11-20 08:27:39.142833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.210 qpair failed and we were unable to recover it. 
00:30:25.210 [2024-11-20 08:27:39.152765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.210 [2024-11-20 08:27:39.152824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.210 [2024-11-20 08:27:39.152837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.211 [2024-11-20 08:27:39.152844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.211 [2024-11-20 08:27:39.152850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.211 [2024-11-20 08:27:39.152865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.211 qpair failed and we were unable to recover it. 
00:30:25.211 [2024-11-20 08:27:39.162790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.211 [2024-11-20 08:27:39.162845] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.211 [2024-11-20 08:27:39.162858] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.211 [2024-11-20 08:27:39.162865] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.211 [2024-11-20 08:27:39.162872] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.211 [2024-11-20 08:27:39.162886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.211 qpair failed and we were unable to recover it. 
00:30:25.211 [2024-11-20 08:27:39.172817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.211 [2024-11-20 08:27:39.172871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.211 [2024-11-20 08:27:39.172884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.211 [2024-11-20 08:27:39.172891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.211 [2024-11-20 08:27:39.172897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.211 [2024-11-20 08:27:39.172912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.211 qpair failed and we were unable to recover it. 
00:30:25.211 [2024-11-20 08:27:39.182887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.211 [2024-11-20 08:27:39.182944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.211 [2024-11-20 08:27:39.182959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.211 [2024-11-20 08:27:39.182967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.211 [2024-11-20 08:27:39.182973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.211 [2024-11-20 08:27:39.182988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.211 qpair failed and we were unable to recover it. 
00:30:25.211 [2024-11-20 08:27:39.192937] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.211 [2024-11-20 08:27:39.193039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.211 [2024-11-20 08:27:39.193053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.211 [2024-11-20 08:27:39.193063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.211 [2024-11-20 08:27:39.193070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.211 [2024-11-20 08:27:39.193085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.211 qpair failed and we were unable to recover it. 
00:30:25.211 [2024-11-20 08:27:39.202843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.211 [2024-11-20 08:27:39.202896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.211 [2024-11-20 08:27:39.202910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.211 [2024-11-20 08:27:39.202917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.211 [2024-11-20 08:27:39.202923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.211 [2024-11-20 08:27:39.202939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.211 qpair failed and we were unable to recover it. 
00:30:25.211 [2024-11-20 08:27:39.212933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.211 [2024-11-20 08:27:39.212988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.211 [2024-11-20 08:27:39.213002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.211 [2024-11-20 08:27:39.213009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.211 [2024-11-20 08:27:39.213015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.211 [2024-11-20 08:27:39.213029] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.211 qpair failed and we were unable to recover it. 
00:30:25.211 [2024-11-20 08:27:39.222995] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.211 [2024-11-20 08:27:39.223048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.211 [2024-11-20 08:27:39.223062] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.211 [2024-11-20 08:27:39.223069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.211 [2024-11-20 08:27:39.223076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.211 [2024-11-20 08:27:39.223090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.211 qpair failed and we were unable to recover it. 
00:30:25.471 [2024-11-20 08:27:39.233001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.471 [2024-11-20 08:27:39.233059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.471 [2024-11-20 08:27:39.233075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.471 [2024-11-20 08:27:39.233083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.471 [2024-11-20 08:27:39.233090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.471 [2024-11-20 08:27:39.233108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.471 qpair failed and we were unable to recover it. 
00:30:25.471 [2024-11-20 08:27:39.243028] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.471 [2024-11-20 08:27:39.243088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.471 [2024-11-20 08:27:39.243103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.471 [2024-11-20 08:27:39.243111] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.471 [2024-11-20 08:27:39.243117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.471 [2024-11-20 08:27:39.243133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.471 qpair failed and we were unable to recover it. 
00:30:25.471 [2024-11-20 08:27:39.253042] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.471 [2024-11-20 08:27:39.253100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.471 [2024-11-20 08:27:39.253114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.471 [2024-11-20 08:27:39.253121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.471 [2024-11-20 08:27:39.253127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.471 [2024-11-20 08:27:39.253142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.471 qpair failed and we were unable to recover it. 
00:30:25.471 [2024-11-20 08:27:39.263117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.471 [2024-11-20 08:27:39.263172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.471 [2024-11-20 08:27:39.263185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.471 [2024-11-20 08:27:39.263193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.471 [2024-11-20 08:27:39.263199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.471 [2024-11-20 08:27:39.263217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.471 qpair failed and we were unable to recover it. 
00:30:25.471 [2024-11-20 08:27:39.273111] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.471 [2024-11-20 08:27:39.273168] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.471 [2024-11-20 08:27:39.273182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.471 [2024-11-20 08:27:39.273188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.471 [2024-11-20 08:27:39.273195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.471 [2024-11-20 08:27:39.273215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.471 qpair failed and we were unable to recover it. 
00:30:25.471 [2024-11-20 08:27:39.283135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.471 [2024-11-20 08:27:39.283192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.471 [2024-11-20 08:27:39.283209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.471 [2024-11-20 08:27:39.283217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.471 [2024-11-20 08:27:39.283223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.471 [2024-11-20 08:27:39.283237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.471 qpair failed and we were unable to recover it. 
00:30:25.471 [2024-11-20 08:27:39.293141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.471 [2024-11-20 08:27:39.293210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.471 [2024-11-20 08:27:39.293224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.471 [2024-11-20 08:27:39.293232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.471 [2024-11-20 08:27:39.293238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.471 [2024-11-20 08:27:39.293253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.471 qpair failed and we were unable to recover it. 
00:30:25.471 [2024-11-20 08:27:39.303183] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.471 [2024-11-20 08:27:39.303243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.471 [2024-11-20 08:27:39.303256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.471 [2024-11-20 08:27:39.303264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.471 [2024-11-20 08:27:39.303270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.471 [2024-11-20 08:27:39.303284] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.471 qpair failed and we were unable to recover it. 
00:30:25.471 [2024-11-20 08:27:39.313247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.471 [2024-11-20 08:27:39.313307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.471 [2024-11-20 08:27:39.313322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.471 [2024-11-20 08:27:39.313330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.471 [2024-11-20 08:27:39.313336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.471 [2024-11-20 08:27:39.313351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.471 qpair failed and we were unable to recover it. 
00:30:25.471 [2024-11-20 08:27:39.323255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.471 [2024-11-20 08:27:39.323310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.471 [2024-11-20 08:27:39.323324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.471 [2024-11-20 08:27:39.323336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.471 [2024-11-20 08:27:39.323342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.471 [2024-11-20 08:27:39.323357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.471 qpair failed and we were unable to recover it. 
00:30:25.471 [2024-11-20 08:27:39.333306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.471 [2024-11-20 08:27:39.333363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.471 [2024-11-20 08:27:39.333377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.471 [2024-11-20 08:27:39.333384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.471 [2024-11-20 08:27:39.333391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.471 [2024-11-20 08:27:39.333405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.471 qpair failed and we were unable to recover it. 
00:30:25.471 [2024-11-20 08:27:39.343330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.471 [2024-11-20 08:27:39.343388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.471 [2024-11-20 08:27:39.343403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.343410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.343417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.343432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.353431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.353495] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.353508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.353515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.353521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.353535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.363424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.363475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.363488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.363495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.363501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.363519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.373423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.373479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.373493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.373500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.373507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.373522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.383478] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.383527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.383541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.383547] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.383554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.383569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.393490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.393547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.393561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.393568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.393574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.393588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.403426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.403479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.403492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.403500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.403507] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.403522] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.413522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.413589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.413604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.413611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.413618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.413633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.423460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.423512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.423526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.423533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.423540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.423554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.433573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.433631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.433646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.433653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.433660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.433675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.443598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.443655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.443669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.443676] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.443682] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.443697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.453631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.453709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.453723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.453734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.453740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.453755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.463591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.463648] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.463661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.472 [2024-11-20 08:27:39.463668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.472 [2024-11-20 08:27:39.463675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.472 [2024-11-20 08:27:39.463689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.472 qpair failed and we were unable to recover it. 
00:30:25.472 [2024-11-20 08:27:39.473651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.472 [2024-11-20 08:27:39.473713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.472 [2024-11-20 08:27:39.473728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.473 [2024-11-20 08:27:39.473735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.473 [2024-11-20 08:27:39.473742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.473 [2024-11-20 08:27:39.473757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.473 qpair failed and we were unable to recover it. 
00:30:25.473 [2024-11-20 08:27:39.483700] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.473 [2024-11-20 08:27:39.483754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.473 [2024-11-20 08:27:39.483770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.473 [2024-11-20 08:27:39.483778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.473 [2024-11-20 08:27:39.483784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.473 [2024-11-20 08:27:39.483799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.473 qpair failed and we were unable to recover it. 
00:30:25.733 [2024-11-20 08:27:39.493802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.733 [2024-11-20 08:27:39.493858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.733 [2024-11-20 08:27:39.493875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.733 [2024-11-20 08:27:39.493883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.733 [2024-11-20 08:27:39.493891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.733 [2024-11-20 08:27:39.493910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.733 qpair failed and we were unable to recover it. 
00:30:25.733 [2024-11-20 08:27:39.503784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.733 [2024-11-20 08:27:39.503838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.733 [2024-11-20 08:27:39.503851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.733 [2024-11-20 08:27:39.503858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.733 [2024-11-20 08:27:39.503865] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.733 [2024-11-20 08:27:39.503880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.733 qpair failed and we were unable to recover it.
00:30:25.733 [2024-11-20 08:27:39.513802] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.733 [2024-11-20 08:27:39.513900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.733 [2024-11-20 08:27:39.513914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.733 [2024-11-20 08:27:39.513921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.733 [2024-11-20 08:27:39.513927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.733 [2024-11-20 08:27:39.513942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.733 qpair failed and we were unable to recover it.
00:30:25.733 [2024-11-20 08:27:39.523829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.733 [2024-11-20 08:27:39.523882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.733 [2024-11-20 08:27:39.523896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.733 [2024-11-20 08:27:39.523903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.733 [2024-11-20 08:27:39.523910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.733 [2024-11-20 08:27:39.523925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.733 qpair failed and we were unable to recover it.
00:30:25.733 [2024-11-20 08:27:39.533790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.733 [2024-11-20 08:27:39.533846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.733 [2024-11-20 08:27:39.533859] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.733 [2024-11-20 08:27:39.533867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.733 [2024-11-20 08:27:39.533873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.733 [2024-11-20 08:27:39.533888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.733 qpair failed and we were unable to recover it.
00:30:25.733 [2024-11-20 08:27:39.543911] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.733 [2024-11-20 08:27:39.543968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.733 [2024-11-20 08:27:39.543982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.733 [2024-11-20 08:27:39.543988] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.733 [2024-11-20 08:27:39.543995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.733 [2024-11-20 08:27:39.544009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.733 qpair failed and we were unable to recover it.
00:30:25.733 [2024-11-20 08:27:39.553926] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.733 [2024-11-20 08:27:39.553986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.733 [2024-11-20 08:27:39.554002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.733 [2024-11-20 08:27:39.554009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.733 [2024-11-20 08:27:39.554015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.733 [2024-11-20 08:27:39.554031] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.733 qpair failed and we were unable to recover it.
00:30:25.733 [2024-11-20 08:27:39.563946] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.733 [2024-11-20 08:27:39.564005] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.733 [2024-11-20 08:27:39.564019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.733 [2024-11-20 08:27:39.564027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.733 [2024-11-20 08:27:39.564033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.733 [2024-11-20 08:27:39.564048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.733 qpair failed and we were unable to recover it.
00:30:25.733 [2024-11-20 08:27:39.574000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.733 [2024-11-20 08:27:39.574056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.733 [2024-11-20 08:27:39.574070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.733 [2024-11-20 08:27:39.574078] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.733 [2024-11-20 08:27:39.574084] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.733 [2024-11-20 08:27:39.574099] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.733 qpair failed and we were unable to recover it.
00:30:25.733 [2024-11-20 08:27:39.583996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.733 [2024-11-20 08:27:39.584053] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.733 [2024-11-20 08:27:39.584066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.733 [2024-11-20 08:27:39.584077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.733 [2024-11-20 08:27:39.584083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.733 [2024-11-20 08:27:39.584097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.733 qpair failed and we were unable to recover it.
00:30:25.733 [2024-11-20 08:27:39.594047] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.733 [2024-11-20 08:27:39.594114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.733 [2024-11-20 08:27:39.594128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.733 [2024-11-20 08:27:39.594136] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.733 [2024-11-20 08:27:39.594141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.733 [2024-11-20 08:27:39.594157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.733 qpair failed and we were unable to recover it.
00:30:25.733 [2024-11-20 08:27:39.604086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.733 [2024-11-20 08:27:39.604165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.733 [2024-11-20 08:27:39.604179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.604188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.604194] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.604212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.614094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.614149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.614163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.614170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.614176] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.614191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.624105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.624160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.624175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.624182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.624189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.624214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.634061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.634119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.634134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.634142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.634149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.634163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.644185] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.644268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.644284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.644292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.644298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.644314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.654157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.654214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.654231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.654239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.654246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.654261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.664252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.664312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.664327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.664336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.664345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.664362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.674264] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.674325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.674339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.674345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.674352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.674367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.684281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.684337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.684351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.684359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.684367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.684384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.694299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.694353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.694368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.694375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.694381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.694396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.704314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.704366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.704380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.704387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.704393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.704408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.714291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.714348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.714368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.714375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.714381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.714396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.724443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.724561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.724582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.734 [2024-11-20 08:27:39.724591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.734 [2024-11-20 08:27:39.724598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.734 [2024-11-20 08:27:39.724615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.734 qpair failed and we were unable to recover it.
00:30:25.734 [2024-11-20 08:27:39.734330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.734 [2024-11-20 08:27:39.734399] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.734 [2024-11-20 08:27:39.734414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.735 [2024-11-20 08:27:39.734421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.735 [2024-11-20 08:27:39.734427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.735 [2024-11-20 08:27:39.734443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.735 qpair failed and we were unable to recover it.
00:30:25.735 [2024-11-20 08:27:39.744476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.735 [2024-11-20 08:27:39.744529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.735 [2024-11-20 08:27:39.744544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.735 [2024-11-20 08:27:39.744550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.735 [2024-11-20 08:27:39.744557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.735 [2024-11-20 08:27:39.744572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.735 qpair failed and we were unable to recover it.
00:30:25.735 [2024-11-20 08:27:39.754492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.735 [2024-11-20 08:27:39.754562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.735 [2024-11-20 08:27:39.754576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.735 [2024-11-20 08:27:39.754583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.735 [2024-11-20 08:27:39.754589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.735 [2024-11-20 08:27:39.754607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.735 qpair failed and we were unable to recover it.
00:30:25.995 [2024-11-20 08:27:39.764444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.995 [2024-11-20 08:27:39.764504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.995 [2024-11-20 08:27:39.764519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.995 [2024-11-20 08:27:39.764526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.995 [2024-11-20 08:27:39.764532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.995 [2024-11-20 08:27:39.764547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.995 qpair failed and we were unable to recover it.
00:30:25.995 [2024-11-20 08:27:39.774460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.995 [2024-11-20 08:27:39.774526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.995 [2024-11-20 08:27:39.774541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.995 [2024-11-20 08:27:39.774548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.995 [2024-11-20 08:27:39.774554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.995 [2024-11-20 08:27:39.774570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.995 qpair failed and we were unable to recover it.
00:30:25.995 [2024-11-20 08:27:39.784549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.995 [2024-11-20 08:27:39.784603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.995 [2024-11-20 08:27:39.784617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.995 [2024-11-20 08:27:39.784623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.995 [2024-11-20 08:27:39.784631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.995 [2024-11-20 08:27:39.784646] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.995 qpair failed and we were unable to recover it.
00:30:25.995 [2024-11-20 08:27:39.794511] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.995 [2024-11-20 08:27:39.794569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.995 [2024-11-20 08:27:39.794583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.995 [2024-11-20 08:27:39.794590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.995 [2024-11-20 08:27:39.794596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.995 [2024-11-20 08:27:39.794612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.995 qpair failed and we were unable to recover it.
00:30:25.995 [2024-11-20 08:27:39.804568] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.995 [2024-11-20 08:27:39.804621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.995 [2024-11-20 08:27:39.804635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.995 [2024-11-20 08:27:39.804642] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.995 [2024-11-20 08:27:39.804649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.995 [2024-11-20 08:27:39.804664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.995 qpair failed and we were unable to recover it.
00:30:25.995 [2024-11-20 08:27:39.814676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.995 [2024-11-20 08:27:39.814731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.995 [2024-11-20 08:27:39.814745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.995 [2024-11-20 08:27:39.814752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.995 [2024-11-20 08:27:39.814758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.995 [2024-11-20 08:27:39.814772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.995 qpair failed and we were unable to recover it.
00:30:25.995 [2024-11-20 08:27:39.824650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.995 [2024-11-20 08:27:39.824704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.995 [2024-11-20 08:27:39.824718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.995 [2024-11-20 08:27:39.824724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.995 [2024-11-20 08:27:39.824730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.995 [2024-11-20 08:27:39.824744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.995 qpair failed and we were unable to recover it.
00:30:25.995 [2024-11-20 08:27:39.834636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.996 [2024-11-20 08:27:39.834693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.996 [2024-11-20 08:27:39.834707] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.996 [2024-11-20 08:27:39.834713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.996 [2024-11-20 08:27:39.834719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.996 [2024-11-20 08:27:39.834734] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.996 qpair failed and we were unable to recover it.
00:30:25.996 [2024-11-20 08:27:39.844751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:25.996 [2024-11-20 08:27:39.844806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:25.996 [2024-11-20 08:27:39.844823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:25.996 [2024-11-20 08:27:39.844830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:25.996 [2024-11-20 08:27:39.844836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:25.996 [2024-11-20 08:27:39.844851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:25.996 qpair failed and we were unable to recover it.
00:30:25.996 [2024-11-20 08:27:39.854682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.996 [2024-11-20 08:27:39.854744] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.996 [2024-11-20 08:27:39.854758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.996 [2024-11-20 08:27:39.854765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.996 [2024-11-20 08:27:39.854772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.996 [2024-11-20 08:27:39.854787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.996 qpair failed and we were unable to recover it. 
00:30:25.996 [2024-11-20 08:27:39.864833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.996 [2024-11-20 08:27:39.864886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.996 [2024-11-20 08:27:39.864899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.996 [2024-11-20 08:27:39.864906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.996 [2024-11-20 08:27:39.864913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.996 [2024-11-20 08:27:39.864928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.996 qpair failed and we were unable to recover it. 
00:30:25.996 [2024-11-20 08:27:39.874799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.996 [2024-11-20 08:27:39.874878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.996 [2024-11-20 08:27:39.874892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.996 [2024-11-20 08:27:39.874898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.996 [2024-11-20 08:27:39.874904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.996 [2024-11-20 08:27:39.874919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.996 qpair failed and we were unable to recover it. 
00:30:25.996 [2024-11-20 08:27:39.884773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.996 [2024-11-20 08:27:39.884866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.996 [2024-11-20 08:27:39.884880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.996 [2024-11-20 08:27:39.884887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.996 [2024-11-20 08:27:39.884893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.996 [2024-11-20 08:27:39.884911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.996 qpair failed and we were unable to recover it. 
00:30:25.996 [2024-11-20 08:27:39.894886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.996 [2024-11-20 08:27:39.894974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.996 [2024-11-20 08:27:39.894988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.996 [2024-11-20 08:27:39.894996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.996 [2024-11-20 08:27:39.895002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.996 [2024-11-20 08:27:39.895017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.996 qpair failed and we were unable to recover it. 
00:30:25.996 [2024-11-20 08:27:39.904918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.996 [2024-11-20 08:27:39.904976] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.996 [2024-11-20 08:27:39.904990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.996 [2024-11-20 08:27:39.904998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.996 [2024-11-20 08:27:39.905004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.996 [2024-11-20 08:27:39.905018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.996 qpair failed and we were unable to recover it. 
00:30:25.996 [2024-11-20 08:27:39.914901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.996 [2024-11-20 08:27:39.914961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.996 [2024-11-20 08:27:39.914975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.996 [2024-11-20 08:27:39.914983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.996 [2024-11-20 08:27:39.914989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.996 [2024-11-20 08:27:39.915004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.996 qpair failed and we were unable to recover it. 
00:30:25.996 [2024-11-20 08:27:39.924901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.996 [2024-11-20 08:27:39.924957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.996 [2024-11-20 08:27:39.924971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.996 [2024-11-20 08:27:39.924978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.996 [2024-11-20 08:27:39.924984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.996 [2024-11-20 08:27:39.924998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.996 qpair failed and we were unable to recover it. 
00:30:25.996 [2024-11-20 08:27:39.935026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.996 [2024-11-20 08:27:39.935087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.996 [2024-11-20 08:27:39.935101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.996 [2024-11-20 08:27:39.935108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.996 [2024-11-20 08:27:39.935114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.996 [2024-11-20 08:27:39.935129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.996 qpair failed and we were unable to recover it. 
00:30:25.996 [2024-11-20 08:27:39.944948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.996 [2024-11-20 08:27:39.945003] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.996 [2024-11-20 08:27:39.945016] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.996 [2024-11-20 08:27:39.945023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.996 [2024-11-20 08:27:39.945029] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.996 [2024-11-20 08:27:39.945044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.996 qpair failed and we were unable to recover it. 
00:30:25.996 [2024-11-20 08:27:39.955052] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.996 [2024-11-20 08:27:39.955109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.996 [2024-11-20 08:27:39.955124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.996 [2024-11-20 08:27:39.955131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.996 [2024-11-20 08:27:39.955137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.996 [2024-11-20 08:27:39.955151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.996 qpair failed and we were unable to recover it. 
00:30:25.996 [2024-11-20 08:27:39.965044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.997 [2024-11-20 08:27:39.965101] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.997 [2024-11-20 08:27:39.965116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.997 [2024-11-20 08:27:39.965122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.997 [2024-11-20 08:27:39.965128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.997 [2024-11-20 08:27:39.965143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.997 qpair failed and we were unable to recover it. 
00:30:25.997 [2024-11-20 08:27:39.975031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.997 [2024-11-20 08:27:39.975088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.997 [2024-11-20 08:27:39.975105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.997 [2024-11-20 08:27:39.975112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.997 [2024-11-20 08:27:39.975119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.997 [2024-11-20 08:27:39.975134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.997 qpair failed and we were unable to recover it. 
00:30:25.997 [2024-11-20 08:27:39.985114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.997 [2024-11-20 08:27:39.985172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.997 [2024-11-20 08:27:39.985187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.997 [2024-11-20 08:27:39.985194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.997 [2024-11-20 08:27:39.985200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.997 [2024-11-20 08:27:39.985220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.997 qpair failed and we were unable to recover it. 
00:30:25.997 [2024-11-20 08:27:39.995094] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.997 [2024-11-20 08:27:39.995150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.997 [2024-11-20 08:27:39.995166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.997 [2024-11-20 08:27:39.995173] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.997 [2024-11-20 08:27:39.995180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.997 [2024-11-20 08:27:39.995194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.997 qpair failed and we were unable to recover it. 
00:30:25.997 [2024-11-20 08:27:40.005168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.997 [2024-11-20 08:27:40.005229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.997 [2024-11-20 08:27:40.005244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.997 [2024-11-20 08:27:40.005252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.997 [2024-11-20 08:27:40.005258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.997 [2024-11-20 08:27:40.005273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.997 qpair failed and we were unable to recover it. 
00:30:25.997 [2024-11-20 08:27:40.015167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:25.997 [2024-11-20 08:27:40.015231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:25.997 [2024-11-20 08:27:40.015246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:25.997 [2024-11-20 08:27:40.015254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:25.997 [2024-11-20 08:27:40.015261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:25.997 [2024-11-20 08:27:40.015280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:25.997 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-20 08:27:40.025386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.257 [2024-11-20 08:27:40.025448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.257 [2024-11-20 08:27:40.025469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.257 [2024-11-20 08:27:40.025477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.257 [2024-11-20 08:27:40.025484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.257 [2024-11-20 08:27:40.025502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-20 08:27:40.035240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.257 [2024-11-20 08:27:40.035323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.257 [2024-11-20 08:27:40.035339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.257 [2024-11-20 08:27:40.035346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.257 [2024-11-20 08:27:40.035352] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.257 [2024-11-20 08:27:40.035368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-20 08:27:40.045374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.257 [2024-11-20 08:27:40.045426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.257 [2024-11-20 08:27:40.045442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.257 [2024-11-20 08:27:40.045450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.257 [2024-11-20 08:27:40.045457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.257 [2024-11-20 08:27:40.045473] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-20 08:27:40.055369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.257 [2024-11-20 08:27:40.055423] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.257 [2024-11-20 08:27:40.055438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.257 [2024-11-20 08:27:40.055445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.257 [2024-11-20 08:27:40.055452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.257 [2024-11-20 08:27:40.055467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-20 08:27:40.065366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.257 [2024-11-20 08:27:40.065424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.257 [2024-11-20 08:27:40.065438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.257 [2024-11-20 08:27:40.065445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.257 [2024-11-20 08:27:40.065452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.257 [2024-11-20 08:27:40.065466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-20 08:27:40.075429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.257 [2024-11-20 08:27:40.075500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.257 [2024-11-20 08:27:40.075515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.257 [2024-11-20 08:27:40.075522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.257 [2024-11-20 08:27:40.075528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.257 [2024-11-20 08:27:40.075544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-20 08:27:40.085456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.257 [2024-11-20 08:27:40.085508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.257 [2024-11-20 08:27:40.085525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.257 [2024-11-20 08:27:40.085533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.257 [2024-11-20 08:27:40.085539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.257 [2024-11-20 08:27:40.085556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-20 08:27:40.095486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.257 [2024-11-20 08:27:40.095539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.257 [2024-11-20 08:27:40.095555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.257 [2024-11-20 08:27:40.095563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.257 [2024-11-20 08:27:40.095571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.257 [2024-11-20 08:27:40.095587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-20 08:27:40.105505] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.257 [2024-11-20 08:27:40.105579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.257 [2024-11-20 08:27:40.105598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.257 [2024-11-20 08:27:40.105606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.257 [2024-11-20 08:27:40.105612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.257 [2024-11-20 08:27:40.105628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.257 qpair failed and we were unable to recover it. 
00:30:26.257 [2024-11-20 08:27:40.115469] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.257 [2024-11-20 08:27:40.115535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.257 [2024-11-20 08:27:40.115549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.257 [2024-11-20 08:27:40.115557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.258 [2024-11-20 08:27:40.115563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.258 [2024-11-20 08:27:40.115578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.258 qpair failed and we were unable to recover it. 
00:30:26.258 [2024-11-20 08:27:40.125571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.125627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.125641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.125648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.258 [2024-11-20 08:27:40.125655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.258 [2024-11-20 08:27:40.125669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-20 08:27:40.135605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.135662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.135676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.135682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.258 [2024-11-20 08:27:40.135689] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.258 [2024-11-20 08:27:40.135703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-20 08:27:40.145647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.145706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.145719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.145726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.258 [2024-11-20 08:27:40.145732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.258 [2024-11-20 08:27:40.145749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-20 08:27:40.155645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.155712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.155726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.155733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.258 [2024-11-20 08:27:40.155739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.258 [2024-11-20 08:27:40.155753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-20 08:27:40.165675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.165732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.165746] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.165753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.258 [2024-11-20 08:27:40.165759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.258 [2024-11-20 08:27:40.165774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-20 08:27:40.175698] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.175753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.175767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.175774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.258 [2024-11-20 08:27:40.175781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.258 [2024-11-20 08:27:40.175795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-20 08:27:40.185630] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.185685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.185701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.185708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.258 [2024-11-20 08:27:40.185715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.258 [2024-11-20 08:27:40.185729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-20 08:27:40.195750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.195807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.195823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.195829] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.258 [2024-11-20 08:27:40.195836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.258 [2024-11-20 08:27:40.195850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-20 08:27:40.205766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.205832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.205847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.205854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.258 [2024-11-20 08:27:40.205860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.258 [2024-11-20 08:27:40.205874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-20 08:27:40.215837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.215886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.215900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.215907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.258 [2024-11-20 08:27:40.215913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.258 [2024-11-20 08:27:40.215928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-20 08:27:40.225823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.225879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.225893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.225900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.258 [2024-11-20 08:27:40.225906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.258 [2024-11-20 08:27:40.225921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-20 08:27:40.235824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.235881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.235901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.235908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.258 [2024-11-20 08:27:40.235915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.258 [2024-11-20 08:27:40.235929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.258 qpair failed and we were unable to recover it.
00:30:26.258 [2024-11-20 08:27:40.245889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.258 [2024-11-20 08:27:40.245944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.258 [2024-11-20 08:27:40.245958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.258 [2024-11-20 08:27:40.245965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.259 [2024-11-20 08:27:40.245971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.259 [2024-11-20 08:27:40.245986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-20 08:27:40.255971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.259 [2024-11-20 08:27:40.256051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.259 [2024-11-20 08:27:40.256065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.259 [2024-11-20 08:27:40.256072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.259 [2024-11-20 08:27:40.256078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.259 [2024-11-20 08:27:40.256092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-20 08:27:40.265933] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.259 [2024-11-20 08:27:40.265981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.259 [2024-11-20 08:27:40.265995] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.259 [2024-11-20 08:27:40.266002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.259 [2024-11-20 08:27:40.266008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.259 [2024-11-20 08:27:40.266023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.259 [2024-11-20 08:27:40.276018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.259 [2024-11-20 08:27:40.276119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.259 [2024-11-20 08:27:40.276133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.259 [2024-11-20 08:27:40.276140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.259 [2024-11-20 08:27:40.276149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.259 [2024-11-20 08:27:40.276164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.259 qpair failed and we were unable to recover it.
00:30:26.518 [2024-11-20 08:27:40.286003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.518 [2024-11-20 08:27:40.286058] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.518 [2024-11-20 08:27:40.286072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.518 [2024-11-20 08:27:40.286079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.518 [2024-11-20 08:27:40.286085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.518 [2024-11-20 08:27:40.286100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.518 qpair failed and we were unable to recover it.
00:30:26.518 [2024-11-20 08:27:40.296034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.518 [2024-11-20 08:27:40.296109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.518 [2024-11-20 08:27:40.296123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.518 [2024-11-20 08:27:40.296130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.518 [2024-11-20 08:27:40.296136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.518 [2024-11-20 08:27:40.296151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.518 qpair failed and we were unable to recover it.
00:30:26.518 [2024-11-20 08:27:40.306058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.518 [2024-11-20 08:27:40.306109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.518 [2024-11-20 08:27:40.306124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.518 [2024-11-20 08:27:40.306131] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.518 [2024-11-20 08:27:40.306137] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.518 [2024-11-20 08:27:40.306152] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.518 qpair failed and we were unable to recover it.
00:30:26.518 [2024-11-20 08:27:40.316079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.518 [2024-11-20 08:27:40.316133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.518 [2024-11-20 08:27:40.316147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.518 [2024-11-20 08:27:40.316154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.518 [2024-11-20 08:27:40.316160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.518 [2024-11-20 08:27:40.316173] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.518 qpair failed and we were unable to recover it.
00:30:26.518 [2024-11-20 08:27:40.326109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.518 [2024-11-20 08:27:40.326162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.518 [2024-11-20 08:27:40.326176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.518 [2024-11-20 08:27:40.326183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.518 [2024-11-20 08:27:40.326190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.518 [2024-11-20 08:27:40.326209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.518 qpair failed and we were unable to recover it.
00:30:26.518 [2024-11-20 08:27:40.336141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.518 [2024-11-20 08:27:40.336192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.518 [2024-11-20 08:27:40.336211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.518 [2024-11-20 08:27:40.336218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.518 [2024-11-20 08:27:40.336225] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.518 [2024-11-20 08:27:40.336239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.518 qpair failed and we were unable to recover it.
00:30:26.518 [2024-11-20 08:27:40.346135] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.518 [2024-11-20 08:27:40.346189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.518 [2024-11-20 08:27:40.346207] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.518 [2024-11-20 08:27:40.346214] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.518 [2024-11-20 08:27:40.346220] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.346234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.356188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.519 [2024-11-20 08:27:40.356253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.519 [2024-11-20 08:27:40.356270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.519 [2024-11-20 08:27:40.356278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.519 [2024-11-20 08:27:40.356284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.356300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.366222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.519 [2024-11-20 08:27:40.366280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.519 [2024-11-20 08:27:40.366302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.519 [2024-11-20 08:27:40.366310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.519 [2024-11-20 08:27:40.366316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.366332] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.376244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.519 [2024-11-20 08:27:40.376305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.519 [2024-11-20 08:27:40.376322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.519 [2024-11-20 08:27:40.376330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.519 [2024-11-20 08:27:40.376336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.376352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.386263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.519 [2024-11-20 08:27:40.386319] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.519 [2024-11-20 08:27:40.386335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.519 [2024-11-20 08:27:40.386343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.519 [2024-11-20 08:27:40.386349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.386364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.396299] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.519 [2024-11-20 08:27:40.396357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.519 [2024-11-20 08:27:40.396374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.519 [2024-11-20 08:27:40.396382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.519 [2024-11-20 08:27:40.396389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.396405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.406350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.519 [2024-11-20 08:27:40.406408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.519 [2024-11-20 08:27:40.406424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.519 [2024-11-20 08:27:40.406432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.519 [2024-11-20 08:27:40.406442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.406458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.416388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.519 [2024-11-20 08:27:40.416444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.519 [2024-11-20 08:27:40.416460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.519 [2024-11-20 08:27:40.416467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.519 [2024-11-20 08:27:40.416474] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.416490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.426308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.519 [2024-11-20 08:27:40.426362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.519 [2024-11-20 08:27:40.426379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.519 [2024-11-20 08:27:40.426386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.519 [2024-11-20 08:27:40.426393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.426408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.436420] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.519 [2024-11-20 08:27:40.436475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.519 [2024-11-20 08:27:40.436492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.519 [2024-11-20 08:27:40.436499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.519 [2024-11-20 08:27:40.436506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.436521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.446441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.519 [2024-11-20 08:27:40.446503] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.519 [2024-11-20 08:27:40.446519] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.519 [2024-11-20 08:27:40.446526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.519 [2024-11-20 08:27:40.446532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.446548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.456455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.519 [2024-11-20 08:27:40.456539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.519 [2024-11-20 08:27:40.456554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.519 [2024-11-20 08:27:40.456561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.519 [2024-11-20 08:27:40.456567] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.456582] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.466493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:26.519 [2024-11-20 08:27:40.466547] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:26.519 [2024-11-20 08:27:40.466561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:26.519 [2024-11-20 08:27:40.466568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:26.519 [2024-11-20 08:27:40.466574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:26.519 [2024-11-20 08:27:40.466589] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:26.519 qpair failed and we were unable to recover it.
00:30:26.519 [2024-11-20 08:27:40.476503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.519 [2024-11-20 08:27:40.476558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.519 [2024-11-20 08:27:40.476573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.519 [2024-11-20 08:27:40.476579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.519 [2024-11-20 08:27:40.476586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.520 [2024-11-20 08:27:40.476601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.520 qpair failed and we were unable to recover it. 
00:30:26.520 [2024-11-20 08:27:40.486545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.520 [2024-11-20 08:27:40.486609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.520 [2024-11-20 08:27:40.486624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.520 [2024-11-20 08:27:40.486631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.520 [2024-11-20 08:27:40.486638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.520 [2024-11-20 08:27:40.486653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.520 qpair failed and we were unable to recover it. 
00:30:26.520 [2024-11-20 08:27:40.496580] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.520 [2024-11-20 08:27:40.496632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.520 [2024-11-20 08:27:40.496649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.520 [2024-11-20 08:27:40.496657] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.520 [2024-11-20 08:27:40.496664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.520 [2024-11-20 08:27:40.496679] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.520 qpair failed and we were unable to recover it. 
00:30:26.520 [2024-11-20 08:27:40.506602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.520 [2024-11-20 08:27:40.506655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.520 [2024-11-20 08:27:40.506669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.520 [2024-11-20 08:27:40.506677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.520 [2024-11-20 08:27:40.506683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.520 [2024-11-20 08:27:40.506698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.520 qpair failed and we were unable to recover it. 
00:30:26.520 [2024-11-20 08:27:40.516631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.520 [2024-11-20 08:27:40.516688] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.520 [2024-11-20 08:27:40.516704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.520 [2024-11-20 08:27:40.516711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.520 [2024-11-20 08:27:40.516717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.520 [2024-11-20 08:27:40.516732] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.520 qpair failed and we were unable to recover it. 
00:30:26.520 [2024-11-20 08:27:40.526658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.520 [2024-11-20 08:27:40.526714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.520 [2024-11-20 08:27:40.526728] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.520 [2024-11-20 08:27:40.526735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.520 [2024-11-20 08:27:40.526741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.520 [2024-11-20 08:27:40.526755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.520 qpair failed and we were unable to recover it. 
00:30:26.520 [2024-11-20 08:27:40.536707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.520 [2024-11-20 08:27:40.536767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.520 [2024-11-20 08:27:40.536780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.520 [2024-11-20 08:27:40.536787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.520 [2024-11-20 08:27:40.536797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.520 [2024-11-20 08:27:40.536811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.520 qpair failed and we were unable to recover it. 
00:30:26.779 [2024-11-20 08:27:40.546736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.779 [2024-11-20 08:27:40.546789] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.779 [2024-11-20 08:27:40.546806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.779 [2024-11-20 08:27:40.546813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.779 [2024-11-20 08:27:40.546820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.779 [2024-11-20 08:27:40.546835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.779 qpair failed and we were unable to recover it. 
00:30:26.779 [2024-11-20 08:27:40.556752] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.779 [2024-11-20 08:27:40.556807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.779 [2024-11-20 08:27:40.556821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.779 [2024-11-20 08:27:40.556828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.556834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.556850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.780 qpair failed and we were unable to recover it. 
00:30:26.780 [2024-11-20 08:27:40.566771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.780 [2024-11-20 08:27:40.566827] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.780 [2024-11-20 08:27:40.566841] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.780 [2024-11-20 08:27:40.566848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.566854] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.566869] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.780 qpair failed and we were unable to recover it. 
00:30:26.780 [2024-11-20 08:27:40.576829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.780 [2024-11-20 08:27:40.576894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.780 [2024-11-20 08:27:40.576908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.780 [2024-11-20 08:27:40.576915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.576922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.576937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.780 qpair failed and we were unable to recover it. 
00:30:26.780 [2024-11-20 08:27:40.586836] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.780 [2024-11-20 08:27:40.586891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.780 [2024-11-20 08:27:40.586905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.780 [2024-11-20 08:27:40.586912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.586919] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.586933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.780 qpair failed and we were unable to recover it. 
00:30:26.780 [2024-11-20 08:27:40.596871] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.780 [2024-11-20 08:27:40.596926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.780 [2024-11-20 08:27:40.596939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.780 [2024-11-20 08:27:40.596946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.596953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.596967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.780 qpair failed and we were unable to recover it. 
00:30:26.780 [2024-11-20 08:27:40.606915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.780 [2024-11-20 08:27:40.606996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.780 [2024-11-20 08:27:40.607011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.780 [2024-11-20 08:27:40.607018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.607025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.607041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.780 qpair failed and we were unable to recover it. 
00:30:26.780 [2024-11-20 08:27:40.616915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.780 [2024-11-20 08:27:40.616970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.780 [2024-11-20 08:27:40.616984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.780 [2024-11-20 08:27:40.616991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.616998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.617014] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.780 qpair failed and we were unable to recover it. 
00:30:26.780 [2024-11-20 08:27:40.626973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.780 [2024-11-20 08:27:40.627026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.780 [2024-11-20 08:27:40.627043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.780 [2024-11-20 08:27:40.627051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.627057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.627073] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.780 qpair failed and we were unable to recover it. 
00:30:26.780 [2024-11-20 08:27:40.636976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.780 [2024-11-20 08:27:40.637035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.780 [2024-11-20 08:27:40.637049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.780 [2024-11-20 08:27:40.637056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.637063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.637078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.780 qpair failed and we were unable to recover it. 
00:30:26.780 [2024-11-20 08:27:40.647053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.780 [2024-11-20 08:27:40.647139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.780 [2024-11-20 08:27:40.647153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.780 [2024-11-20 08:27:40.647160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.647166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.647180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.780 qpair failed and we were unable to recover it. 
00:30:26.780 [2024-11-20 08:27:40.657046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.780 [2024-11-20 08:27:40.657099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.780 [2024-11-20 08:27:40.657112] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.780 [2024-11-20 08:27:40.657119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.657126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.657140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.780 qpair failed and we were unable to recover it. 
00:30:26.780 [2024-11-20 08:27:40.667059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.780 [2024-11-20 08:27:40.667159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.780 [2024-11-20 08:27:40.667173] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.780 [2024-11-20 08:27:40.667180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.667191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.667209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.780 qpair failed and we were unable to recover it. 
00:30:26.780 [2024-11-20 08:27:40.677107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.780 [2024-11-20 08:27:40.677174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.780 [2024-11-20 08:27:40.677188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.780 [2024-11-20 08:27:40.677195] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.780 [2024-11-20 08:27:40.677205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.780 [2024-11-20 08:27:40.677221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:26.781 [2024-11-20 08:27:40.687180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.781 [2024-11-20 08:27:40.687240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.781 [2024-11-20 08:27:40.687254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.781 [2024-11-20 08:27:40.687261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.781 [2024-11-20 08:27:40.687268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.781 [2024-11-20 08:27:40.687282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:26.781 [2024-11-20 08:27:40.697179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.781 [2024-11-20 08:27:40.697244] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.781 [2024-11-20 08:27:40.697260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.781 [2024-11-20 08:27:40.697267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.781 [2024-11-20 08:27:40.697274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.781 [2024-11-20 08:27:40.697289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:26.781 [2024-11-20 08:27:40.707233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.781 [2024-11-20 08:27:40.707330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.781 [2024-11-20 08:27:40.707344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.781 [2024-11-20 08:27:40.707351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.781 [2024-11-20 08:27:40.707357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.781 [2024-11-20 08:27:40.707373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:26.781 [2024-11-20 08:27:40.717224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.781 [2024-11-20 08:27:40.717280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.781 [2024-11-20 08:27:40.717296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.781 [2024-11-20 08:27:40.717303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.781 [2024-11-20 08:27:40.717310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.781 [2024-11-20 08:27:40.717324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:26.781 [2024-11-20 08:27:40.727248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.781 [2024-11-20 08:27:40.727310] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.781 [2024-11-20 08:27:40.727324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.781 [2024-11-20 08:27:40.727331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.781 [2024-11-20 08:27:40.727337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.781 [2024-11-20 08:27:40.727352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:26.781 [2024-11-20 08:27:40.737286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.781 [2024-11-20 08:27:40.737344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.781 [2024-11-20 08:27:40.737358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.781 [2024-11-20 08:27:40.737365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.781 [2024-11-20 08:27:40.737371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.781 [2024-11-20 08:27:40.737386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:26.781 [2024-11-20 08:27:40.747301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.781 [2024-11-20 08:27:40.747358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.781 [2024-11-20 08:27:40.747373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.781 [2024-11-20 08:27:40.747380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.781 [2024-11-20 08:27:40.747386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.781 [2024-11-20 08:27:40.747401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:26.781 [2024-11-20 08:27:40.757382] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.781 [2024-11-20 08:27:40.757442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.781 [2024-11-20 08:27:40.757459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.781 [2024-11-20 08:27:40.757466] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.781 [2024-11-20 08:27:40.757472] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.781 [2024-11-20 08:27:40.757487] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:26.781 [2024-11-20 08:27:40.767373] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.781 [2024-11-20 08:27:40.767431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.781 [2024-11-20 08:27:40.767447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.781 [2024-11-20 08:27:40.767454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.781 [2024-11-20 08:27:40.767460] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.781 [2024-11-20 08:27:40.767475] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:26.781 [2024-11-20 08:27:40.777399] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.781 [2024-11-20 08:27:40.777457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.781 [2024-11-20 08:27:40.777472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.781 [2024-11-20 08:27:40.777480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.781 [2024-11-20 08:27:40.777486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.781 [2024-11-20 08:27:40.777501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:26.781 [2024-11-20 08:27:40.787459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.781 [2024-11-20 08:27:40.787513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.781 [2024-11-20 08:27:40.787527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.781 [2024-11-20 08:27:40.787534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.781 [2024-11-20 08:27:40.787540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.781 [2024-11-20 08:27:40.787554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:26.781 [2024-11-20 08:27:40.797454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:26.781 [2024-11-20 08:27:40.797512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:26.781 [2024-11-20 08:27:40.797526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:26.781 [2024-11-20 08:27:40.797533] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:26.781 [2024-11-20 08:27:40.797543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:26.781 [2024-11-20 08:27:40.797557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:26.781 qpair failed and we were unable to recover it. 
00:30:27.041 [2024-11-20 08:27:40.807491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.041 [2024-11-20 08:27:40.807548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.041 [2024-11-20 08:27:40.807562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.041 [2024-11-20 08:27:40.807568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.041 [2024-11-20 08:27:40.807575] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.041 [2024-11-20 08:27:40.807590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.041 qpair failed and we were unable to recover it. 
00:30:27.041 [2024-11-20 08:27:40.817525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.041 [2024-11-20 08:27:40.817581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.041 [2024-11-20 08:27:40.817595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.041 [2024-11-20 08:27:40.817602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.041 [2024-11-20 08:27:40.817609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.041 [2024-11-20 08:27:40.817623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.041 qpair failed and we were unable to recover it. 
00:30:27.041 [2024-11-20 08:27:40.827543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.041 [2024-11-20 08:27:40.827595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.041 [2024-11-20 08:27:40.827608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.041 [2024-11-20 08:27:40.827615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.041 [2024-11-20 08:27:40.827622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.041 [2024-11-20 08:27:40.827637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.041 qpair failed and we were unable to recover it. 
00:30:27.041 [2024-11-20 08:27:40.837606] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.041 [2024-11-20 08:27:40.837710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.041 [2024-11-20 08:27:40.837725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.041 [2024-11-20 08:27:40.837732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.041 [2024-11-20 08:27:40.837738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.041 [2024-11-20 08:27:40.837752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.041 qpair failed and we were unable to recover it. 
00:30:27.041 [2024-11-20 08:27:40.847595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.041 [2024-11-20 08:27:40.847653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.041 [2024-11-20 08:27:40.847667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.041 [2024-11-20 08:27:40.847674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.041 [2024-11-20 08:27:40.847680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.041 [2024-11-20 08:27:40.847694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.041 qpair failed and we were unable to recover it. 
00:30:27.041 [2024-11-20 08:27:40.857625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.041 [2024-11-20 08:27:40.857681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.041 [2024-11-20 08:27:40.857695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.041 [2024-11-20 08:27:40.857702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.041 [2024-11-20 08:27:40.857709] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.041 [2024-11-20 08:27:40.857724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.041 qpair failed and we were unable to recover it. 
00:30:27.041 [2024-11-20 08:27:40.867640] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.041 [2024-11-20 08:27:40.867695] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.041 [2024-11-20 08:27:40.867709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.041 [2024-11-20 08:27:40.867716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.041 [2024-11-20 08:27:40.867722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.041 [2024-11-20 08:27:40.867736] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.041 qpair failed and we were unable to recover it. 
00:30:27.041 [2024-11-20 08:27:40.877665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.041 [2024-11-20 08:27:40.877720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.041 [2024-11-20 08:27:40.877733] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.041 [2024-11-20 08:27:40.877740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.041 [2024-11-20 08:27:40.877748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.041 [2024-11-20 08:27:40.877763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.041 qpair failed and we were unable to recover it. 
00:30:27.041 [2024-11-20 08:27:40.887727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.041 [2024-11-20 08:27:40.887782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.041 [2024-11-20 08:27:40.887799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.041 [2024-11-20 08:27:40.887806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.041 [2024-11-20 08:27:40.887813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.041 [2024-11-20 08:27:40.887828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.041 qpair failed and we were unable to recover it. 
00:30:27.041 [2024-11-20 08:27:40.897738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.041 [2024-11-20 08:27:40.897800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:40.897813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:40.897820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:40.897826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:40.897840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:40.907756] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:40.907837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:40.907851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:40.907858] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:40.907864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:40.907879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:40.917805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:40.917862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:40.917875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:40.917882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:40.917888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:40.917903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:40.927819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:40.927873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:40.927887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:40.927894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:40.927904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:40.927918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:40.937855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:40.937912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:40.937926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:40.937933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:40.937939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:40.937953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:40.947887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:40.947941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:40.947954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:40.947961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:40.947967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:40.947981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:40.957849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:40.957936] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:40.957950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:40.957957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:40.957963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:40.957977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:40.967914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:40.967970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:40.967983] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:40.967990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:40.967996] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:40.968010] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:40.977961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:40.978014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:40.978027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:40.978034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:40.978040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:40.978055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:40.987996] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:40.988052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:40.988066] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:40.988073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:40.988078] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:40.988092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:40.998053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:40.998112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:40.998125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:40.998133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:40.998139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:40.998153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:41.008080] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:41.008136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:41.008150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:41.008157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:41.008163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:41.008178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:41.018100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:41.018150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.042 [2024-11-20 08:27:41.018169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.042 [2024-11-20 08:27:41.018176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.042 [2024-11-20 08:27:41.018182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.042 [2024-11-20 08:27:41.018196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.042 qpair failed and we were unable to recover it. 
00:30:27.042 [2024-11-20 08:27:41.028109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.042 [2024-11-20 08:27:41.028163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.043 [2024-11-20 08:27:41.028176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.043 [2024-11-20 08:27:41.028183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.043 [2024-11-20 08:27:41.028189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.043 [2024-11-20 08:27:41.028208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.043 qpair failed and we were unable to recover it. 
00:30:27.043 [2024-11-20 08:27:41.038138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.043 [2024-11-20 08:27:41.038198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.043 [2024-11-20 08:27:41.038215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.043 [2024-11-20 08:27:41.038222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.043 [2024-11-20 08:27:41.038229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.043 [2024-11-20 08:27:41.038244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.043 qpair failed and we were unable to recover it. 
00:30:27.043 [2024-11-20 08:27:41.048175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.043 [2024-11-20 08:27:41.048232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.043 [2024-11-20 08:27:41.048248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.043 [2024-11-20 08:27:41.048256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.043 [2024-11-20 08:27:41.048262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.043 [2024-11-20 08:27:41.048277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.043 qpair failed and we were unable to recover it. 
00:30:27.043 [2024-11-20 08:27:41.058181] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.043 [2024-11-20 08:27:41.058237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.043 [2024-11-20 08:27:41.058251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.043 [2024-11-20 08:27:41.058258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.043 [2024-11-20 08:27:41.058268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.043 [2024-11-20 08:27:41.058282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.043 qpair failed and we were unable to recover it. 
00:30:27.303 [2024-11-20 08:27:41.068240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.303 [2024-11-20 08:27:41.068305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.303 [2024-11-20 08:27:41.068319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.303 [2024-11-20 08:27:41.068326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.303 [2024-11-20 08:27:41.068333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.303 [2024-11-20 08:27:41.068347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.303 qpair failed and we were unable to recover it. 
00:30:27.303 [2024-11-20 08:27:41.078263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.303 [2024-11-20 08:27:41.078321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.303 [2024-11-20 08:27:41.078335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.303 [2024-11-20 08:27:41.078342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.303 [2024-11-20 08:27:41.078348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.303 [2024-11-20 08:27:41.078363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.303 qpair failed and we were unable to recover it. 
00:30:27.303 [2024-11-20 08:27:41.088285] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.303 [2024-11-20 08:27:41.088352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.303 [2024-11-20 08:27:41.088366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.303 [2024-11-20 08:27:41.088373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.303 [2024-11-20 08:27:41.088379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.303 [2024-11-20 08:27:41.088395] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.303 qpair failed and we were unable to recover it. 
00:30:27.303 [2024-11-20 08:27:41.098244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.303 [2024-11-20 08:27:41.098302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.303 [2024-11-20 08:27:41.098315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.303 [2024-11-20 08:27:41.098322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.303 [2024-11-20 08:27:41.098329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.303 [2024-11-20 08:27:41.098343] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.303 qpair failed and we were unable to recover it. 
00:30:27.303 [2024-11-20 08:27:41.108330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.303 [2024-11-20 08:27:41.108388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.303 [2024-11-20 08:27:41.108402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.303 [2024-11-20 08:27:41.108409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.303 [2024-11-20 08:27:41.108415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.303 [2024-11-20 08:27:41.108429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.303 qpair failed and we were unable to recover it. 
00:30:27.303 [2024-11-20 08:27:41.118358] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.303 [2024-11-20 08:27:41.118417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.303 [2024-11-20 08:27:41.118431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.303 [2024-11-20 08:27:41.118438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.303 [2024-11-20 08:27:41.118445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.303 [2024-11-20 08:27:41.118459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.303 qpair failed and we were unable to recover it. 
00:30:27.303 [2024-11-20 08:27:41.128410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.303 [2024-11-20 08:27:41.128468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.303 [2024-11-20 08:27:41.128483] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.303 [2024-11-20 08:27:41.128489] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.303 [2024-11-20 08:27:41.128496] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.303 [2024-11-20 08:27:41.128510] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.303 qpair failed and we were unable to recover it. 
00:30:27.303 [2024-11-20 08:27:41.138429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.303 [2024-11-20 08:27:41.138485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.303 [2024-11-20 08:27:41.138499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.303 [2024-11-20 08:27:41.138506] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.303 [2024-11-20 08:27:41.138513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.303 [2024-11-20 08:27:41.138527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.303 qpair failed and we were unable to recover it. 
00:30:27.303 [2024-11-20 08:27:41.148425] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.303 [2024-11-20 08:27:41.148485] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.303 [2024-11-20 08:27:41.148503] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.303 [2024-11-20 08:27:41.148510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.303 [2024-11-20 08:27:41.148516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.303 [2024-11-20 08:27:41.148531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.303 qpair failed and we were unable to recover it. 
00:30:27.303 [2024-11-20 08:27:41.158437] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.303 [2024-11-20 08:27:41.158496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.303 [2024-11-20 08:27:41.158511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.303 [2024-11-20 08:27:41.158518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.303 [2024-11-20 08:27:41.158525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.303 [2024-11-20 08:27:41.158540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.303 qpair failed and we were unable to recover it. 
00:30:27.303 [2024-11-20 08:27:41.168523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.303 [2024-11-20 08:27:41.168600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.303 [2024-11-20 08:27:41.168614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.303 [2024-11-20 08:27:41.168621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.303 [2024-11-20 08:27:41.168627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.303 [2024-11-20 08:27:41.168642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.303 qpair failed and we were unable to recover it. 
00:30:27.303 [2024-11-20 08:27:41.178507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.303 [2024-11-20 08:27:41.178578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.303 [2024-11-20 08:27:41.178593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.303 [2024-11-20 08:27:41.178599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.178606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.178621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.188602] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.188658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.188671] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.188678] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.188688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.188703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.198559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.198619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.198633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.198641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.198647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.198661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.208650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.208707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.208722] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.208728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.208736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.208750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.218572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.218629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.218644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.218652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.218658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.218672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.228652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.228700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.228713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.228720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.228726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.228741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.238709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.238787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.238803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.238810] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.238816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.238831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.248747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.248803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.248817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.248824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.248830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.248844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.258704] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.258755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.258770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.258776] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.258784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.258798] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.268731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.268786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.268800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.268807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.268813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.268827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.278822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.278890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.278909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.278915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.278922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.278937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.288852] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.288911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.288924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.288931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.288938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.288952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.298891] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.298945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.298959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.298966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.304 [2024-11-20 08:27:41.298972] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.304 [2024-11-20 08:27:41.298987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.304 qpair failed and we were unable to recover it. 
00:30:27.304 [2024-11-20 08:27:41.308903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.304 [2024-11-20 08:27:41.308953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.304 [2024-11-20 08:27:41.308967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.304 [2024-11-20 08:27:41.308974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.305 [2024-11-20 08:27:41.308980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.305 [2024-11-20 08:27:41.308996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.305 qpair failed and we were unable to recover it. 
00:30:27.305 [2024-11-20 08:27:41.318977] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.305 [2024-11-20 08:27:41.319077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.305 [2024-11-20 08:27:41.319092] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.305 [2024-11-20 08:27:41.319098] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.305 [2024-11-20 08:27:41.319109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.305 [2024-11-20 08:27:41.319124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.305 qpair failed and we were unable to recover it. 
00:30:27.565 [2024-11-20 08:27:41.329029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.565 [2024-11-20 08:27:41.329086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.565 [2024-11-20 08:27:41.329101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.565 [2024-11-20 08:27:41.329107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.565 [2024-11-20 08:27:41.329114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.565 [2024-11-20 08:27:41.329128] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.565 qpair failed and we were unable to recover it. 
00:30:27.565 [2024-11-20 08:27:41.338948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.565 [2024-11-20 08:27:41.338998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.565 [2024-11-20 08:27:41.339011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.565 [2024-11-20 08:27:41.339019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.565 [2024-11-20 08:27:41.339025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.565 [2024-11-20 08:27:41.339040] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.565 qpair failed and we were unable to recover it. 
00:30:27.565 [2024-11-20 08:27:41.349026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.565 [2024-11-20 08:27:41.349080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.565 [2024-11-20 08:27:41.349094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.565 [2024-11-20 08:27:41.349100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.565 [2024-11-20 08:27:41.349107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.565 [2024-11-20 08:27:41.349121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.565 qpair failed and we were unable to recover it. 
00:30:27.565 [2024-11-20 08:27:41.359136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.565 [2024-11-20 08:27:41.359206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.565 [2024-11-20 08:27:41.359223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.565 [2024-11-20 08:27:41.359231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.565 [2024-11-20 08:27:41.359237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.565 [2024-11-20 08:27:41.359252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.565 qpair failed and we were unable to recover it. 
00:30:27.565 [2024-11-20 08:27:41.369048] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.565 [2024-11-20 08:27:41.369111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.565 [2024-11-20 08:27:41.369125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.565 [2024-11-20 08:27:41.369132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.565 [2024-11-20 08:27:41.369139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.565 [2024-11-20 08:27:41.369155] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.565 qpair failed and we were unable to recover it. 
00:30:27.565 [2024-11-20 08:27:41.379076] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.565 [2024-11-20 08:27:41.379130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.565 [2024-11-20 08:27:41.379145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.565 [2024-11-20 08:27:41.379152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.565 [2024-11-20 08:27:41.379158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.565 [2024-11-20 08:27:41.379174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.565 qpair failed and we were unable to recover it. 
00:30:27.565 [2024-11-20 08:27:41.389213] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.565 [2024-11-20 08:27:41.389295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.565 [2024-11-20 08:27:41.389309] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.565 [2024-11-20 08:27:41.389317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.565 [2024-11-20 08:27:41.389323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.565 [2024-11-20 08:27:41.389338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.565 qpair failed and we were unable to recover it. 
00:30:27.565 [2024-11-20 08:27:41.399175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.565 [2024-11-20 08:27:41.399239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.565 [2024-11-20 08:27:41.399253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.565 [2024-11-20 08:27:41.399261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.565 [2024-11-20 08:27:41.399267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.565 [2024-11-20 08:27:41.399282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.565 qpair failed and we were unable to recover it. 
00:30:27.565 [2024-11-20 08:27:41.409257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.565 [2024-11-20 08:27:41.409312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.565 [2024-11-20 08:27:41.409329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.565 [2024-11-20 08:27:41.409336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.565 [2024-11-20 08:27:41.409342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.565 [2024-11-20 08:27:41.409357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.565 qpair failed and we were unable to recover it. 
00:30:27.565 [2024-11-20 08:27:41.419227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.565 [2024-11-20 08:27:41.419283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.565 [2024-11-20 08:27:41.419299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.565 [2024-11-20 08:27:41.419306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.565 [2024-11-20 08:27:41.419312] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.565 [2024-11-20 08:27:41.419327] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.565 qpair failed and we were unable to recover it. 
00:30:27.565 [2024-11-20 08:27:41.429237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.565 [2024-11-20 08:27:41.429293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.429307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.429314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.429320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.429334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.439223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.439305] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.439319] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.439326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.439332] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.439346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.449279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.449331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.449345] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.449352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.449361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.449375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.459280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.459333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.459349] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.459356] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.459362] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.459377] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.469306] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.469362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.469376] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.469383] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.469389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.469403] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.479391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.479468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.479482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.479488] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.479494] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.479508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.489366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.489426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.489439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.489445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.489451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.489464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.499439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.499492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.499506] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.499512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.499518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.499532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.509409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.509463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.509477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.509483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.509489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.509503] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.519534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.519586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.519600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.519606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.519612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.519626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.529533] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.529588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.529601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.529607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.529613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.529626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.539565] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.539617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.539634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.539640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.539646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.539660] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.549596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.549664] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.549680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.566 [2024-11-20 08:27:41.549686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.566 [2024-11-20 08:27:41.549692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.566 [2024-11-20 08:27:41.549708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.566 qpair failed and we were unable to recover it. 
00:30:27.566 [2024-11-20 08:27:41.559653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.566 [2024-11-20 08:27:41.559731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.566 [2024-11-20 08:27:41.559745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.567 [2024-11-20 08:27:41.559751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.567 [2024-11-20 08:27:41.559757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.567 [2024-11-20 08:27:41.559772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.567 qpair failed and we were unable to recover it. 
00:30:27.567 [2024-11-20 08:27:41.569646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.567 [2024-11-20 08:27:41.569705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.567 [2024-11-20 08:27:41.569718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.567 [2024-11-20 08:27:41.569725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.567 [2024-11-20 08:27:41.569732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.567 [2024-11-20 08:27:41.569746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.567 qpair failed and we were unable to recover it. 
00:30:27.567 [2024-11-20 08:27:41.579684] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.567 [2024-11-20 08:27:41.579767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.567 [2024-11-20 08:27:41.579781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.567 [2024-11-20 08:27:41.579788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.567 [2024-11-20 08:27:41.579797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.567 [2024-11-20 08:27:41.579812] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.567 qpair failed and we were unable to recover it. 
00:30:27.827 [2024-11-20 08:27:41.589739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.827 [2024-11-20 08:27:41.589797] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.827 [2024-11-20 08:27:41.589810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.827 [2024-11-20 08:27:41.589817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.827 [2024-11-20 08:27:41.589823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.827 [2024-11-20 08:27:41.589837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.827 qpair failed and we were unable to recover it. 
00:30:27.827 [2024-11-20 08:27:41.599737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.827 [2024-11-20 08:27:41.599791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.827 [2024-11-20 08:27:41.599805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.827 [2024-11-20 08:27:41.599811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.827 [2024-11-20 08:27:41.599817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.827 [2024-11-20 08:27:41.599831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.827 qpair failed and we were unable to recover it. 
00:30:27.827 [2024-11-20 08:27:41.609792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.827 [2024-11-20 08:27:41.609850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.827 [2024-11-20 08:27:41.609863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.827 [2024-11-20 08:27:41.609870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.827 [2024-11-20 08:27:41.609875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.827 [2024-11-20 08:27:41.609889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.827 qpair failed and we were unable to recover it. 
00:30:27.827 [2024-11-20 08:27:41.619781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.827 [2024-11-20 08:27:41.619859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.827 [2024-11-20 08:27:41.619873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.827 [2024-11-20 08:27:41.619880] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.827 [2024-11-20 08:27:41.619886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.827 [2024-11-20 08:27:41.619900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.827 qpair failed and we were unable to recover it. 
00:30:27.827 [2024-11-20 08:27:41.629831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.827 [2024-11-20 08:27:41.629888] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.827 [2024-11-20 08:27:41.629902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.827 [2024-11-20 08:27:41.629909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.827 [2024-11-20 08:27:41.629914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.827 [2024-11-20 08:27:41.629929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.827 qpair failed and we were unable to recover it. 
00:30:27.827 [2024-11-20 08:27:41.639763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.827 [2024-11-20 08:27:41.639829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.827 [2024-11-20 08:27:41.639843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.827 [2024-11-20 08:27:41.639849] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.827 [2024-11-20 08:27:41.639855] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.827 [2024-11-20 08:27:41.639870] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.827 qpair failed and we were unable to recover it. 
00:30:27.827 [2024-11-20 08:27:41.649872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.649927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.649940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.649946] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.649952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.649966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.659890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.659944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.659958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.659965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.659971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.659985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.669961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.670045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.670064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.670071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.670076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.670090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.679961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.680014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.680028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.680034] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.680040] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.680054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.689955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.690008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.690022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.690028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.690034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.690048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.700037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.700089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.700103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.700109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.700115] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.700130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.710078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.710134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.710149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.710156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.710164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.710179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.720105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.720162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.720176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.720183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.720189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.720209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.730107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.730159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.730172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.730178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.730184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.730198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.740131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.740183] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.740196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.740207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.740212] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.740227] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.750150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.750220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.750234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.750241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.750246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.750260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.760260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.760316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.760330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.760336] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.760342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.760356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.770221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.828 [2024-11-20 08:27:41.770283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.828 [2024-11-20 08:27:41.770297] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.828 [2024-11-20 08:27:41.770304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.828 [2024-11-20 08:27:41.770310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.828 [2024-11-20 08:27:41.770324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.828 qpair failed and we were unable to recover it. 
00:30:27.828 [2024-11-20 08:27:41.780292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.829 [2024-11-20 08:27:41.780352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.829 [2024-11-20 08:27:41.780366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.829 [2024-11-20 08:27:41.780372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.829 [2024-11-20 08:27:41.780378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.829 [2024-11-20 08:27:41.780393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.829 qpair failed and we were unable to recover it. 
00:30:27.829 [2024-11-20 08:27:41.790277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.829 [2024-11-20 08:27:41.790330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.829 [2024-11-20 08:27:41.790343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.829 [2024-11-20 08:27:41.790349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.829 [2024-11-20 08:27:41.790355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.829 [2024-11-20 08:27:41.790369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.829 qpair failed and we were unable to recover it. 
00:30:27.829 [2024-11-20 08:27:41.800300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.829 [2024-11-20 08:27:41.800356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.829 [2024-11-20 08:27:41.800372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.829 [2024-11-20 08:27:41.800378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.829 [2024-11-20 08:27:41.800384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.829 [2024-11-20 08:27:41.800399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.829 qpair failed and we were unable to recover it. 
00:30:27.829 [2024-11-20 08:27:41.810328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.829 [2024-11-20 08:27:41.810385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.829 [2024-11-20 08:27:41.810399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.829 [2024-11-20 08:27:41.810405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.829 [2024-11-20 08:27:41.810411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.829 [2024-11-20 08:27:41.810425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.829 qpair failed and we were unable to recover it. 
00:30:27.829 [2024-11-20 08:27:41.820279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.829 [2024-11-20 08:27:41.820330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.829 [2024-11-20 08:27:41.820343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.829 [2024-11-20 08:27:41.820349] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.829 [2024-11-20 08:27:41.820355] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.829 [2024-11-20 08:27:41.820369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.829 qpair failed and we were unable to recover it. 
00:30:27.829 [2024-11-20 08:27:41.830374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.829 [2024-11-20 08:27:41.830424] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.829 [2024-11-20 08:27:41.830437] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.829 [2024-11-20 08:27:41.830444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.829 [2024-11-20 08:27:41.830450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.829 [2024-11-20 08:27:41.830464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.829 qpair failed and we were unable to recover it. 
00:30:27.829 [2024-11-20 08:27:41.840454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:27.829 [2024-11-20 08:27:41.840509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:27.829 [2024-11-20 08:27:41.840523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:27.829 [2024-11-20 08:27:41.840529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:27.829 [2024-11-20 08:27:41.840538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:27.829 [2024-11-20 08:27:41.840552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:27.829 qpair failed and we were unable to recover it. 
00:30:28.095 [2024-11-20 08:27:41.850435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.095 [2024-11-20 08:27:41.850512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.095 [2024-11-20 08:27:41.850526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.095 [2024-11-20 08:27:41.850532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.095 [2024-11-20 08:27:41.850538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.095 [2024-11-20 08:27:41.850552] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.095 qpair failed and we were unable to recover it. 
00:30:28.095 [2024-11-20 08:27:41.860504] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.095 [2024-11-20 08:27:41.860558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.095 [2024-11-20 08:27:41.860571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.095 [2024-11-20 08:27:41.860577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.095 [2024-11-20 08:27:41.860583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.095 [2024-11-20 08:27:41.860596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.095 qpair failed and we were unable to recover it. 
00:30:28.095 [2024-11-20 08:27:41.870492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.095 [2024-11-20 08:27:41.870548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.095 [2024-11-20 08:27:41.870561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.095 [2024-11-20 08:27:41.870567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.095 [2024-11-20 08:27:41.870573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.095 [2024-11-20 08:27:41.870586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.095 qpair failed and we were unable to recover it. 
00:30:28.095 [2024-11-20 08:27:41.880548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.095 [2024-11-20 08:27:41.880604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.095 [2024-11-20 08:27:41.880617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.095 [2024-11-20 08:27:41.880623] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.095 [2024-11-20 08:27:41.880629] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.095 [2024-11-20 08:27:41.880643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.095 qpair failed and we were unable to recover it. 
00:30:28.095 [2024-11-20 08:27:41.890546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.095 [2024-11-20 08:27:41.890605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.095 [2024-11-20 08:27:41.890618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.095 [2024-11-20 08:27:41.890625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.095 [2024-11-20 08:27:41.890630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.095 [2024-11-20 08:27:41.890644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.095 qpair failed and we were unable to recover it. 
00:30:28.095 [2024-11-20 08:27:41.900593] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.095 [2024-11-20 08:27:41.900650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.095 [2024-11-20 08:27:41.900663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.095 [2024-11-20 08:27:41.900669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.095 [2024-11-20 08:27:41.900675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.095 [2024-11-20 08:27:41.900689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.095 qpair failed and we were unable to recover it. 
00:30:28.095 [2024-11-20 08:27:41.910620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.095 [2024-11-20 08:27:41.910726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.095 [2024-11-20 08:27:41.910739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.095 [2024-11-20 08:27:41.910746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.095 [2024-11-20 08:27:41.910752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.095 [2024-11-20 08:27:41.910765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.095 qpair failed and we were unable to recover it. 
00:30:28.095 [2024-11-20 08:27:41.920645] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.095 [2024-11-20 08:27:41.920704] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.095 [2024-11-20 08:27:41.920717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.095 [2024-11-20 08:27:41.920723] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.095 [2024-11-20 08:27:41.920729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.095 [2024-11-20 08:27:41.920744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.095 qpair failed and we were unable to recover it. 
00:30:28.095 [2024-11-20 08:27:41.930663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.095 [2024-11-20 08:27:41.930717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.095 [2024-11-20 08:27:41.930732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.095 [2024-11-20 08:27:41.930739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.095 [2024-11-20 08:27:41.930744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.095 [2024-11-20 08:27:41.930759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.095 qpair failed and we were unable to recover it. 
00:30:28.095 [2024-11-20 08:27:41.940686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.095 [2024-11-20 08:27:41.940737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.095 [2024-11-20 08:27:41.940750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.095 [2024-11-20 08:27:41.940756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.095 [2024-11-20 08:27:41.940762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.095 [2024-11-20 08:27:41.940776] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.095 qpair failed and we were unable to recover it. 
00:30:28.095 [2024-11-20 08:27:41.950705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.095 [2024-11-20 08:27:41.950755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.095 [2024-11-20 08:27:41.950768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.095 [2024-11-20 08:27:41.950775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.095 [2024-11-20 08:27:41.950781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.095 [2024-11-20 08:27:41.950794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.095 qpair failed and we were unable to recover it. 
00:30:28.095 [2024-11-20 08:27:41.960750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.095 [2024-11-20 08:27:41.960805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:41.960818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:41.960824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:41.960830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:41.960845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:41.970804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:41.970859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:41.970873] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:41.970882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:41.970888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:41.970901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:41.980792] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:41.980858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:41.980872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:41.980878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:41.980883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:41.980897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:41.990831] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:41.990882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:41.990896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:41.990902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:41.990908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:41.990921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:42.000919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:42.000973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:42.000986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:42.000993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:42.000999] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:42.001012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:42.010829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:42.010924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:42.010937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:42.010943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:42.010949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:42.010963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:42.020932] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:42.020999] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:42.021013] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:42.021019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:42.021025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:42.021039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:42.030993] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:42.031047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:42.031060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:42.031067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:42.031072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:42.031086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:42.040952] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:42.041010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:42.041024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:42.041030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:42.041036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:42.041050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:42.051015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:42.051073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:42.051087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:42.051093] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:42.051099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:42.051113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:42.060999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:42.061068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:42.061085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:42.061091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:42.061097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:42.061111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:42.071053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:42.071145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:42.071158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:42.071165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:42.071171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:42.071184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:42.081092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:42.081146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:42.081160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.096 [2024-11-20 08:27:42.081167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.096 [2024-11-20 08:27:42.081172] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.096 [2024-11-20 08:27:42.081186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.096 qpair failed and we were unable to recover it. 
00:30:28.096 [2024-11-20 08:27:42.091121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.096 [2024-11-20 08:27:42.091170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.096 [2024-11-20 08:27:42.091184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.097 [2024-11-20 08:27:42.091190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.097 [2024-11-20 08:27:42.091196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.097 [2024-11-20 08:27:42.091213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.097 qpair failed and we were unable to recover it. 
00:30:28.097 [2024-11-20 08:27:42.101161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.097 [2024-11-20 08:27:42.101218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.097 [2024-11-20 08:27:42.101232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.097 [2024-11-20 08:27:42.101241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.097 [2024-11-20 08:27:42.101247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.097 [2024-11-20 08:27:42.101261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.097 qpair failed and we were unable to recover it. 
00:30:28.097 [2024-11-20 08:27:42.111106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.097 [2024-11-20 08:27:42.111154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.097 [2024-11-20 08:27:42.111169] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.097 [2024-11-20 08:27:42.111176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.097 [2024-11-20 08:27:42.111182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.097 [2024-11-20 08:27:42.111196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.097 qpair failed and we were unable to recover it. 
00:30:28.357 [2024-11-20 08:27:42.121215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.357 [2024-11-20 08:27:42.121268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.357 [2024-11-20 08:27:42.121282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.357 [2024-11-20 08:27:42.121289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.357 [2024-11-20 08:27:42.121295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.357 [2024-11-20 08:27:42.121310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.357 qpair failed and we were unable to recover it. 
00:30:28.357 [2024-11-20 08:27:42.131164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.357 [2024-11-20 08:27:42.131220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.357 [2024-11-20 08:27:42.131234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.357 [2024-11-20 08:27:42.131241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.357 [2024-11-20 08:27:42.131247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.357 [2024-11-20 08:27:42.131260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.357 qpair failed and we were unable to recover it. 
00:30:28.357 [2024-11-20 08:27:42.141320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.357 [2024-11-20 08:27:42.141421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.357 [2024-11-20 08:27:42.141434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.357 [2024-11-20 08:27:42.141441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.357 [2024-11-20 08:27:42.141447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.357 [2024-11-20 08:27:42.141462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.357 qpair failed and we were unable to recover it. 
00:30:28.357 [2024-11-20 08:27:42.151290] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.357 [2024-11-20 08:27:42.151345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.357 [2024-11-20 08:27:42.151358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.357 [2024-11-20 08:27:42.151365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.357 [2024-11-20 08:27:42.151370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.357 [2024-11-20 08:27:42.151384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.357 qpair failed and we were unable to recover it. 
00:30:28.357 [2024-11-20 08:27:42.161318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.357 [2024-11-20 08:27:42.161373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.357 [2024-11-20 08:27:42.161386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.357 [2024-11-20 08:27:42.161392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.357 [2024-11-20 08:27:42.161398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.357 [2024-11-20 08:27:42.161411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.357 qpair failed and we were unable to recover it. 
00:30:28.357 [2024-11-20 08:27:42.171380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.357 [2024-11-20 08:27:42.171431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.357 [2024-11-20 08:27:42.171445] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.357 [2024-11-20 08:27:42.171451] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.357 [2024-11-20 08:27:42.171457] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.357 [2024-11-20 08:27:42.171471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.357 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.181372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.181422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.181435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.181441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.181447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.181461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.191402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.191455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.191473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.191480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.191486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.191500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.201356] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.201416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.201430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.201436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.201442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.201456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.211461] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.211534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.211547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.211554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.211559] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.211573] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.221485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.221538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.221551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.221557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.221563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.221577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.231516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.231568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.231581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.231590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.231596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.231610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.241542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.241598] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.241612] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.241619] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.241624] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.241639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.251578] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.251662] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.251675] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.251681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.251687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.251701] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.261579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.261626] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.261639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.261645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.261651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.261665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.271638] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.271691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.271704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.271711] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.271716] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.271731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.281648] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.281703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.281717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.281724] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.281729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.281743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.291675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.291767] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.291781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.291788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.291794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.291807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.358 [2024-11-20 08:27:42.301703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.358 [2024-11-20 08:27:42.301757] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.358 [2024-11-20 08:27:42.301770] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.358 [2024-11-20 08:27:42.301777] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.358 [2024-11-20 08:27:42.301783] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.358 [2024-11-20 08:27:42.301797] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.358 qpair failed and we were unable to recover it. 
00:30:28.359 [2024-11-20 08:27:42.311734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.359 [2024-11-20 08:27:42.311788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.359 [2024-11-20 08:27:42.311802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.359 [2024-11-20 08:27:42.311808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.359 [2024-11-20 08:27:42.311815] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.359 [2024-11-20 08:27:42.311829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.359 qpair failed and we were unable to recover it. 
00:30:28.359 [2024-11-20 08:27:42.321706] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.359 [2024-11-20 08:27:42.321794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.359 [2024-11-20 08:27:42.321813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.359 [2024-11-20 08:27:42.321820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.359 [2024-11-20 08:27:42.321825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.359 [2024-11-20 08:27:42.321839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.359 qpair failed and we were unable to recover it. 
00:30:28.359 [2024-11-20 08:27:42.331830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.359 [2024-11-20 08:27:42.331894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.359 [2024-11-20 08:27:42.331908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.359 [2024-11-20 08:27:42.331915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.359 [2024-11-20 08:27:42.331922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.359 [2024-11-20 08:27:42.331936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.359 qpair failed and we were unable to recover it. 
00:30:28.359 [2024-11-20 08:27:42.341815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.359 [2024-11-20 08:27:42.341887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.359 [2024-11-20 08:27:42.341902] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.359 [2024-11-20 08:27:42.341908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.359 [2024-11-20 08:27:42.341914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.359 [2024-11-20 08:27:42.341929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.359 qpair failed and we were unable to recover it. 
00:30:28.359 [2024-11-20 08:27:42.351834] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.359 [2024-11-20 08:27:42.351886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.359 [2024-11-20 08:27:42.351899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.359 [2024-11-20 08:27:42.351906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.359 [2024-11-20 08:27:42.351912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.359 [2024-11-20 08:27:42.351926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.359 qpair failed and we were unable to recover it. 
00:30:28.359 [2024-11-20 08:27:42.361876] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.359 [2024-11-20 08:27:42.361931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.359 [2024-11-20 08:27:42.361947] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.359 [2024-11-20 08:27:42.361958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.359 [2024-11-20 08:27:42.361964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.359 [2024-11-20 08:27:42.361980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.359 qpair failed and we were unable to recover it. 
00:30:28.359 [2024-11-20 08:27:42.371830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.359 [2024-11-20 08:27:42.371883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.359 [2024-11-20 08:27:42.371897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.359 [2024-11-20 08:27:42.371904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.359 [2024-11-20 08:27:42.371910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.359 [2024-11-20 08:27:42.371925] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.359 qpair failed and we were unable to recover it. 
00:30:28.620 [2024-11-20 08:27:42.381935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.620 [2024-11-20 08:27:42.382000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.620 [2024-11-20 08:27:42.382014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.620 [2024-11-20 08:27:42.382022] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.620 [2024-11-20 08:27:42.382028] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.620 [2024-11-20 08:27:42.382043] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.620 qpair failed and we were unable to recover it. 
00:30:28.620 [2024-11-20 08:27:42.391991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.620 [2024-11-20 08:27:42.392051] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.620 [2024-11-20 08:27:42.392065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.620 [2024-11-20 08:27:42.392072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.620 [2024-11-20 08:27:42.392079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.620 [2024-11-20 08:27:42.392094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.620 qpair failed and we were unable to recover it. 
00:30:28.620 [2024-11-20 08:27:42.402003] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.620 [2024-11-20 08:27:42.402071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.620 [2024-11-20 08:27:42.402085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.620 [2024-11-20 08:27:42.402092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.620 [2024-11-20 08:27:42.402099] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.620 [2024-11-20 08:27:42.402113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.620 qpair failed and we were unable to recover it. 
00:30:28.620 [2024-11-20 08:27:42.412017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.620 [2024-11-20 08:27:42.412068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.620 [2024-11-20 08:27:42.412083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.620 [2024-11-20 08:27:42.412091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.620 [2024-11-20 08:27:42.412098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.620 [2024-11-20 08:27:42.412113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.620 qpair failed and we were unable to recover it. 
00:30:28.620 [2024-11-20 08:27:42.422058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.620 [2024-11-20 08:27:42.422111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.620 [2024-11-20 08:27:42.422125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.620 [2024-11-20 08:27:42.422132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.620 [2024-11-20 08:27:42.422139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.620 [2024-11-20 08:27:42.422153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.620 qpair failed and we were unable to recover it. 
00:30:28.620 [2024-11-20 08:27:42.432073] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.620 [2024-11-20 08:27:42.432126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.620 [2024-11-20 08:27:42.432140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.620 [2024-11-20 08:27:42.432147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.620 [2024-11-20 08:27:42.432153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.620 [2024-11-20 08:27:42.432168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.620 qpair failed and we were unable to recover it. 
00:30:28.620 [2024-11-20 08:27:42.442140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.620 [2024-11-20 08:27:42.442239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.620 [2024-11-20 08:27:42.442256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.620 [2024-11-20 08:27:42.442262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.620 [2024-11-20 08:27:42.442268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.620 [2024-11-20 08:27:42.442283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.620 qpair failed and we were unable to recover it. 
00:30:28.620 [2024-11-20 08:27:42.452143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.620 [2024-11-20 08:27:42.452198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.620 [2024-11-20 08:27:42.452216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.620 [2024-11-20 08:27:42.452223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.620 [2024-11-20 08:27:42.452229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.620 [2024-11-20 08:27:42.452244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.620 qpair failed and we were unable to recover it. 
00:30:28.620 [2024-11-20 08:27:42.462157] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.620 [2024-11-20 08:27:42.462215] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.620 [2024-11-20 08:27:42.462229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.620 [2024-11-20 08:27:42.462235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.620 [2024-11-20 08:27:42.462242] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.620 [2024-11-20 08:27:42.462257] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.620 qpair failed and we were unable to recover it. 
00:30:28.620 [2024-11-20 08:27:42.472184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.620 [2024-11-20 08:27:42.472243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.620 [2024-11-20 08:27:42.472257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.620 [2024-11-20 08:27:42.472264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.620 [2024-11-20 08:27:42.472270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.620 [2024-11-20 08:27:42.472285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.620 qpair failed and we were unable to recover it. 
00:30:28.620 [2024-11-20 08:27:42.482218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.620 [2024-11-20 08:27:42.482277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.482291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.482298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.482304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.482320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.492250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.621 [2024-11-20 08:27:42.492306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.492320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.492331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.492338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.492352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.502256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.621 [2024-11-20 08:27:42.502320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.502334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.502342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.502348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.502363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.512292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.621 [2024-11-20 08:27:42.512348] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.512362] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.512369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.512375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.512389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.522381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.621 [2024-11-20 08:27:42.522439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.522456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.522464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.522471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.522485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.532354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.621 [2024-11-20 08:27:42.532406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.532420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.532427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.532433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.532448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.542386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.621 [2024-11-20 08:27:42.542448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.542462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.542469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.542476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.542491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.552409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.621 [2024-11-20 08:27:42.552464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.552480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.552487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.552493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.552508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.562394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.621 [2024-11-20 08:27:42.562449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.562464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.562471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.562478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.562493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.572489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.621 [2024-11-20 08:27:42.572543] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.572557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.572564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.572571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.572587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.582493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.621 [2024-11-20 08:27:42.582549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.582562] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.582570] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.582576] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.582591] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.592535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.621 [2024-11-20 08:27:42.592590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.592604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.592611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.592618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.592632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.602556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:28.621 [2024-11-20 08:27:42.602659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:28.621 [2024-11-20 08:27:42.602673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:28.621 [2024-11-20 08:27:42.602680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:28.621 [2024-11-20 08:27:42.602687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:28.621 [2024-11-20 08:27:42.602702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:28.621 qpair failed and we were unable to recover it. 
00:30:28.621 [2024-11-20 08:27:42.612564] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.621 [2024-11-20 08:27:42.612617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.621 [2024-11-20 08:27:42.612630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.622 [2024-11-20 08:27:42.612637] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.622 [2024-11-20 08:27:42.612644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.622 [2024-11-20 08:27:42.612659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.622 qpair failed and we were unable to recover it.
00:30:28.622 [2024-11-20 08:27:42.622636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.622 [2024-11-20 08:27:42.622694] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.622 [2024-11-20 08:27:42.622709] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.622 [2024-11-20 08:27:42.622720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.622 [2024-11-20 08:27:42.622726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.622 [2024-11-20 08:27:42.622740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.622 qpair failed and we were unable to recover it.
00:30:28.622 [2024-11-20 08:27:42.632573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.622 [2024-11-20 08:27:42.632634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.622 [2024-11-20 08:27:42.632648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.622 [2024-11-20 08:27:42.632655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.622 [2024-11-20 08:27:42.632662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.622 [2024-11-20 08:27:42.632676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.622 qpair failed and we were unable to recover it.
00:30:28.882 [2024-11-20 08:27:42.642643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.882 [2024-11-20 08:27:42.642720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.882 [2024-11-20 08:27:42.642734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.882 [2024-11-20 08:27:42.642740] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.882 [2024-11-20 08:27:42.642746] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.882 [2024-11-20 08:27:42.642761] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.882 qpair failed and we were unable to recover it.
00:30:28.882 [2024-11-20 08:27:42.652664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.882 [2024-11-20 08:27:42.652722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.882 [2024-11-20 08:27:42.652736] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.882 [2024-11-20 08:27:42.652744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.882 [2024-11-20 08:27:42.652751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.882 [2024-11-20 08:27:42.652766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.882 qpair failed and we were unable to recover it.
00:30:28.882 [2024-11-20 08:27:42.662731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.882 [2024-11-20 08:27:42.662788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.882 [2024-11-20 08:27:42.662803] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.882 [2024-11-20 08:27:42.662811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.882 [2024-11-20 08:27:42.662818] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.882 [2024-11-20 08:27:42.662833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.882 qpair failed and we were unable to recover it.
00:30:28.882 [2024-11-20 08:27:42.672715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.882 [2024-11-20 08:27:42.672782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.882 [2024-11-20 08:27:42.672797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.882 [2024-11-20 08:27:42.672804] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.882 [2024-11-20 08:27:42.672810] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.882 [2024-11-20 08:27:42.672826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.882 qpair failed and we were unable to recover it.
00:30:28.882 [2024-11-20 08:27:42.682729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.882 [2024-11-20 08:27:42.682786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.882 [2024-11-20 08:27:42.682800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.882 [2024-11-20 08:27:42.682806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.882 [2024-11-20 08:27:42.682813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.882 [2024-11-20 08:27:42.682828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.882 qpair failed and we were unable to recover it.
00:30:28.882 [2024-11-20 08:27:42.692812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.882 [2024-11-20 08:27:42.692866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.882 [2024-11-20 08:27:42.692881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.882 [2024-11-20 08:27:42.692888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.882 [2024-11-20 08:27:42.692894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.882 [2024-11-20 08:27:42.692908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.882 qpair failed and we were unable to recover it.
00:30:28.882 [2024-11-20 08:27:42.702778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.702839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.702854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.702860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.702867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.702881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.712843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.712896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.712910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.712917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.712923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.712937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.722828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.722881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.722896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.722903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.722909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.722923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.732968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.733036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.733054] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.733062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.733068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.733083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.742948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.743021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.743036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.743043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.743049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.743064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.752982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.753037] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.753051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.753061] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.753068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.753082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.763033] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.763091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.763105] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.763112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.763119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.763133] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.773037] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.773091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.773106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.773113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.773119] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.773134] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.782991] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.783045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.783059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.783066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.783072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.783086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.793082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.793134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.793148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.793155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.793161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.793179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.803119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.803175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.803190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.803197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.803209] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.803224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.813123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.813182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.813196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.813209] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.813215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.813230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.823163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.883 [2024-11-20 08:27:42.823230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.883 [2024-11-20 08:27:42.823245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.883 [2024-11-20 08:27:42.823252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.883 [2024-11-20 08:27:42.823258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.883 [2024-11-20 08:27:42.823273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.883 qpair failed and we were unable to recover it.
00:30:28.883 [2024-11-20 08:27:42.833197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.884 [2024-11-20 08:27:42.833259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.884 [2024-11-20 08:27:42.833274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.884 [2024-11-20 08:27:42.833281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.884 [2024-11-20 08:27:42.833287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.884 [2024-11-20 08:27:42.833302] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.884 qpair failed and we were unable to recover it.
00:30:28.884 [2024-11-20 08:27:42.843230] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.884 [2024-11-20 08:27:42.843296] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.884 [2024-11-20 08:27:42.843310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.884 [2024-11-20 08:27:42.843317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.884 [2024-11-20 08:27:42.843324] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.884 [2024-11-20 08:27:42.843339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.884 qpair failed and we were unable to recover it.
00:30:28.884 [2024-11-20 08:27:42.853282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.884 [2024-11-20 08:27:42.853337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.884 [2024-11-20 08:27:42.853352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.884 [2024-11-20 08:27:42.853359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.884 [2024-11-20 08:27:42.853365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.884 [2024-11-20 08:27:42.853379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.884 qpair failed and we were unable to recover it.
00:30:28.884 [2024-11-20 08:27:42.863296] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.884 [2024-11-20 08:27:42.863355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.884 [2024-11-20 08:27:42.863370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.884 [2024-11-20 08:27:42.863377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.884 [2024-11-20 08:27:42.863383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.884 [2024-11-20 08:27:42.863398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.884 qpair failed and we were unable to recover it.
00:30:28.884 [2024-11-20 08:27:42.873305] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.884 [2024-11-20 08:27:42.873357] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.884 [2024-11-20 08:27:42.873373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.884 [2024-11-20 08:27:42.873380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.884 [2024-11-20 08:27:42.873387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.884 [2024-11-20 08:27:42.873402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.884 qpair failed and we were unable to recover it.
00:30:28.884 [2024-11-20 08:27:42.883364] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.884 [2024-11-20 08:27:42.883419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.884 [2024-11-20 08:27:42.883434] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.884 [2024-11-20 08:27:42.883444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.884 [2024-11-20 08:27:42.883451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.884 [2024-11-20 08:27:42.883466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.884 qpair failed and we were unable to recover it.
00:30:28.884 [2024-11-20 08:27:42.893389] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.884 [2024-11-20 08:27:42.893492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.884 [2024-11-20 08:27:42.893505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.884 [2024-11-20 08:27:42.893512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.884 [2024-11-20 08:27:42.893519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.884 [2024-11-20 08:27:42.893534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.884 qpair failed and we were unable to recover it.
00:30:28.884 [2024-11-20 08:27:42.903434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:28.884 [2024-11-20 08:27:42.903528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:28.884 [2024-11-20 08:27:42.903543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:28.884 [2024-11-20 08:27:42.903550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:28.884 [2024-11-20 08:27:42.903556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:28.884 [2024-11-20 08:27:42.903571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:28.884 qpair failed and we were unable to recover it.
00:30:29.144 [2024-11-20 08:27:42.913423] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.144 [2024-11-20 08:27:42.913477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.144 [2024-11-20 08:27:42.913491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.144 [2024-11-20 08:27:42.913498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.144 [2024-11-20 08:27:42.913504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.144 [2024-11-20 08:27:42.913519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.144 qpair failed and we were unable to recover it.
00:30:29.144 [2024-11-20 08:27:42.923416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.145 [2024-11-20 08:27:42.923473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.145 [2024-11-20 08:27:42.923487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.145 [2024-11-20 08:27:42.923494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.145 [2024-11-20 08:27:42.923500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.145 [2024-11-20 08:27:42.923517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.145 qpair failed and we were unable to recover it.
00:30:29.145 [2024-11-20 08:27:42.933502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.145 [2024-11-20 08:27:42.933580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.145 [2024-11-20 08:27:42.933595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.145 [2024-11-20 08:27:42.933602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.145 [2024-11-20 08:27:42.933608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.145 [2024-11-20 08:27:42.933622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.145 qpair failed and we were unable to recover it.
00:30:29.145 [2024-11-20 08:27:42.943534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.145 [2024-11-20 08:27:42.943589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.145 [2024-11-20 08:27:42.943604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.145 [2024-11-20 08:27:42.943612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.145 [2024-11-20 08:27:42.943618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.145 [2024-11-20 08:27:42.943633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.145 qpair failed and we were unable to recover it.
00:30:29.145 [2024-11-20 08:27:42.953546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.145 [2024-11-20 08:27:42.953615] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.145 [2024-11-20 08:27:42.953629] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.145 [2024-11-20 08:27:42.953636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.145 [2024-11-20 08:27:42.953642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.145 [2024-11-20 08:27:42.953657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.145 qpair failed and we were unable to recover it.
00:30:29.145 [2024-11-20 08:27:42.963510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.145 [2024-11-20 08:27:42.963567] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.145 [2024-11-20 08:27:42.963581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.145 [2024-11-20 08:27:42.963588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.145 [2024-11-20 08:27:42.963595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.145 [2024-11-20 08:27:42.963609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.145 qpair failed and we were unable to recover it. 
00:30:29.145 [2024-11-20 08:27:42.973651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.145 [2024-11-20 08:27:42.973712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.145 [2024-11-20 08:27:42.973726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.145 [2024-11-20 08:27:42.973733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.145 [2024-11-20 08:27:42.973740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.145 [2024-11-20 08:27:42.973754] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.145 qpair failed and we were unable to recover it. 
00:30:29.145 [2024-11-20 08:27:42.983629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.145 [2024-11-20 08:27:42.983685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.145 [2024-11-20 08:27:42.983699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.145 [2024-11-20 08:27:42.983705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.145 [2024-11-20 08:27:42.983712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.145 [2024-11-20 08:27:42.983727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.145 qpair failed and we were unable to recover it. 
00:30:29.145 [2024-11-20 08:27:42.993654] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.145 [2024-11-20 08:27:42.993709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.145 [2024-11-20 08:27:42.993723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.145 [2024-11-20 08:27:42.993729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.145 [2024-11-20 08:27:42.993736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.145 [2024-11-20 08:27:42.993750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.145 qpair failed and we were unable to recover it. 
00:30:29.145 [2024-11-20 08:27:43.003690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.145 [2024-11-20 08:27:43.003749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.145 [2024-11-20 08:27:43.003762] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.145 [2024-11-20 08:27:43.003770] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.145 [2024-11-20 08:27:43.003777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.145 [2024-11-20 08:27:43.003792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.145 qpair failed and we were unable to recover it. 
00:30:29.145 [2024-11-20 08:27:43.013712] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.145 [2024-11-20 08:27:43.013772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.145 [2024-11-20 08:27:43.013786] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.145 [2024-11-20 08:27:43.013796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.145 [2024-11-20 08:27:43.013802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.145 [2024-11-20 08:27:43.013817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.145 qpair failed and we were unable to recover it. 
00:30:29.145 [2024-11-20 08:27:43.023744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.145 [2024-11-20 08:27:43.023814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.145 [2024-11-20 08:27:43.023828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.145 [2024-11-20 08:27:43.023835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.145 [2024-11-20 08:27:43.023842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.145 [2024-11-20 08:27:43.023857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.145 qpair failed and we were unable to recover it. 
00:30:29.145 [2024-11-20 08:27:43.033810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.145 [2024-11-20 08:27:43.033864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.145 [2024-11-20 08:27:43.033877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.145 [2024-11-20 08:27:43.033884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.145 [2024-11-20 08:27:43.033891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.145 [2024-11-20 08:27:43.033904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.145 qpair failed and we were unable to recover it. 
00:30:29.145 [2024-11-20 08:27:43.043803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.145 [2024-11-20 08:27:43.043862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.145 [2024-11-20 08:27:43.043876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.145 [2024-11-20 08:27:43.043882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.145 [2024-11-20 08:27:43.043889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.145 [2024-11-20 08:27:43.043904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.146 [2024-11-20 08:27:43.053827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.146 [2024-11-20 08:27:43.053886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.146 [2024-11-20 08:27:43.053900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.146 [2024-11-20 08:27:43.053908] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.146 [2024-11-20 08:27:43.053914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.146 [2024-11-20 08:27:43.053931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.146 [2024-11-20 08:27:43.063859] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.146 [2024-11-20 08:27:43.063925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.146 [2024-11-20 08:27:43.063940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.146 [2024-11-20 08:27:43.063948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.146 [2024-11-20 08:27:43.063955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.146 [2024-11-20 08:27:43.063970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.146 [2024-11-20 08:27:43.073913] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.146 [2024-11-20 08:27:43.073982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.146 [2024-11-20 08:27:43.073997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.146 [2024-11-20 08:27:43.074004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.146 [2024-11-20 08:27:43.074010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.146 [2024-11-20 08:27:43.074024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.146 [2024-11-20 08:27:43.083974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.146 [2024-11-20 08:27:43.084080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.146 [2024-11-20 08:27:43.084095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.146 [2024-11-20 08:27:43.084102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.146 [2024-11-20 08:27:43.084109] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.146 [2024-11-20 08:27:43.084124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.146 [2024-11-20 08:27:43.093877] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.146 [2024-11-20 08:27:43.093932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.146 [2024-11-20 08:27:43.093946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.146 [2024-11-20 08:27:43.093953] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.146 [2024-11-20 08:27:43.093960] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.146 [2024-11-20 08:27:43.093975] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.146 [2024-11-20 08:27:43.103982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.146 [2024-11-20 08:27:43.104039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.146 [2024-11-20 08:27:43.104053] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.146 [2024-11-20 08:27:43.104060] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.146 [2024-11-20 08:27:43.104067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.146 [2024-11-20 08:27:43.104081] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.146 [2024-11-20 08:27:43.114034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.146 [2024-11-20 08:27:43.114096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.146 [2024-11-20 08:27:43.114111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.146 [2024-11-20 08:27:43.114118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.146 [2024-11-20 08:27:43.114124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.146 [2024-11-20 08:27:43.114139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.146 [2024-11-20 08:27:43.124038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.146 [2024-11-20 08:27:43.124097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.146 [2024-11-20 08:27:43.124111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.146 [2024-11-20 08:27:43.124118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.146 [2024-11-20 08:27:43.124125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.146 [2024-11-20 08:27:43.124139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.146 [2024-11-20 08:27:43.134064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.146 [2024-11-20 08:27:43.134116] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.146 [2024-11-20 08:27:43.134130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.146 [2024-11-20 08:27:43.134137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.146 [2024-11-20 08:27:43.134144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.146 [2024-11-20 08:27:43.134158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.146 [2024-11-20 08:27:43.144089] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.146 [2024-11-20 08:27:43.144146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.146 [2024-11-20 08:27:43.144160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.146 [2024-11-20 08:27:43.144171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.146 [2024-11-20 08:27:43.144177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.146 [2024-11-20 08:27:43.144192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.146 [2024-11-20 08:27:43.154152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.146 [2024-11-20 08:27:43.154211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.146 [2024-11-20 08:27:43.154226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.146 [2024-11-20 08:27:43.154233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.146 [2024-11-20 08:27:43.154239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.146 [2024-11-20 08:27:43.154254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.146 [2024-11-20 08:27:43.164150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.146 [2024-11-20 08:27:43.164209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.146 [2024-11-20 08:27:43.164223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.146 [2024-11-20 08:27:43.164231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.146 [2024-11-20 08:27:43.164236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.146 [2024-11-20 08:27:43.164251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.146 qpair failed and we were unable to recover it. 
00:30:29.406 [2024-11-20 08:27:43.174182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.407 [2024-11-20 08:27:43.174262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.407 [2024-11-20 08:27:43.174276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.407 [2024-11-20 08:27:43.174283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.407 [2024-11-20 08:27:43.174289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.407 [2024-11-20 08:27:43.174303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.407 qpair failed and we were unable to recover it. 
00:30:29.407 [2024-11-20 08:27:43.184232] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.407 [2024-11-20 08:27:43.184306] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.407 [2024-11-20 08:27:43.184321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.407 [2024-11-20 08:27:43.184329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.407 [2024-11-20 08:27:43.184335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.407 [2024-11-20 08:27:43.184353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.407 qpair failed and we were unable to recover it. 
00:30:29.407 [2024-11-20 08:27:43.194279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.407 [2024-11-20 08:27:43.194333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.407 [2024-11-20 08:27:43.194347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.407 [2024-11-20 08:27:43.194354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.407 [2024-11-20 08:27:43.194360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.407 [2024-11-20 08:27:43.194374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.407 qpair failed and we were unable to recover it. 
00:30:29.407 [2024-11-20 08:27:43.204268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.407 [2024-11-20 08:27:43.204328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.407 [2024-11-20 08:27:43.204343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.407 [2024-11-20 08:27:43.204350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.407 [2024-11-20 08:27:43.204357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.407 [2024-11-20 08:27:43.204371] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.407 qpair failed and we were unable to recover it. 
00:30:29.407 [2024-11-20 08:27:43.214323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.407 [2024-11-20 08:27:43.214382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.407 [2024-11-20 08:27:43.214397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.407 [2024-11-20 08:27:43.214405] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.407 [2024-11-20 08:27:43.214412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.407 [2024-11-20 08:27:43.214426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.407 qpair failed and we were unable to recover it. 
00:30:29.407 [2024-11-20 08:27:43.224313] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.407 [2024-11-20 08:27:43.224376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.407 [2024-11-20 08:27:43.224391] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.407 [2024-11-20 08:27:43.224398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.407 [2024-11-20 08:27:43.224404] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.407 [2024-11-20 08:27:43.224418] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.407 qpair failed and we were unable to recover it. 
00:30:29.407 [2024-11-20 08:27:43.234400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.407 [2024-11-20 08:27:43.234456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.407 [2024-11-20 08:27:43.234470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.407 [2024-11-20 08:27:43.234477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.407 [2024-11-20 08:27:43.234485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.407 [2024-11-20 08:27:43.234500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.407 qpair failed and we were unable to recover it.
00:30:29.407 [2024-11-20 08:27:43.244385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.407 [2024-11-20 08:27:43.244441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.407 [2024-11-20 08:27:43.244455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.407 [2024-11-20 08:27:43.244462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.407 [2024-11-20 08:27:43.244469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.407 [2024-11-20 08:27:43.244484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.407 qpair failed and we were unable to recover it.
00:30:29.407 [2024-11-20 08:27:43.254415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.407 [2024-11-20 08:27:43.254472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.407 [2024-11-20 08:27:43.254486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.407 [2024-11-20 08:27:43.254493] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.407 [2024-11-20 08:27:43.254500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.407 [2024-11-20 08:27:43.254513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.407 qpair failed and we were unable to recover it.
00:30:29.407 [2024-11-20 08:27:43.264480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.407 [2024-11-20 08:27:43.264535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.407 [2024-11-20 08:27:43.264549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.407 [2024-11-20 08:27:43.264557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.407 [2024-11-20 08:27:43.264563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.407 [2024-11-20 08:27:43.264577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.407 qpair failed and we were unable to recover it.
00:30:29.407 [2024-11-20 08:27:43.274453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.407 [2024-11-20 08:27:43.274508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.407 [2024-11-20 08:27:43.274521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.407 [2024-11-20 08:27:43.274534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.407 [2024-11-20 08:27:43.274540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.407 [2024-11-20 08:27:43.274555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.407 qpair failed and we were unable to recover it.
00:30:29.407 [2024-11-20 08:27:43.284497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.407 [2024-11-20 08:27:43.284568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.407 [2024-11-20 08:27:43.284582] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.407 [2024-11-20 08:27:43.284588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.407 [2024-11-20 08:27:43.284595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.407 [2024-11-20 08:27:43.284610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.407 qpair failed and we were unable to recover it.
00:30:29.407 [2024-11-20 08:27:43.294516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.407 [2024-11-20 08:27:43.294574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.407 [2024-11-20 08:27:43.294588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.407 [2024-11-20 08:27:43.294595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.407 [2024-11-20 08:27:43.294602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.407 [2024-11-20 08:27:43.294616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.407 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.304591] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.304650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.304663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.304670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.304676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.408 [2024-11-20 08:27:43.304691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.408 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.314611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.314667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.314681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.314688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.314695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.408 [2024-11-20 08:27:43.314712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.408 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.324611] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.324667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.324681] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.324688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.324695] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.408 [2024-11-20 08:27:43.324709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.408 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.334628] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.334679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.334693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.334701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.334707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.408 [2024-11-20 08:27:43.334721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.408 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.344680] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.344736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.344750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.344757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.344764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.408 [2024-11-20 08:27:43.344778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.408 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.354701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.354784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.354800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.354807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.354814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.408 [2024-11-20 08:27:43.354829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.408 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.364716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.364774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.364788] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.364795] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.364801] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.408 [2024-11-20 08:27:43.364816] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.408 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.374778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.374835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.374850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.374857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.374863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.408 [2024-11-20 08:27:43.374877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.408 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.384829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.384884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.384900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.384907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.384913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.408 [2024-11-20 08:27:43.384929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.408 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.394822] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.394878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.394892] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.394899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.394906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.408 [2024-11-20 08:27:43.394921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.408 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.404757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.404811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.404825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.404835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.404842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.408 [2024-11-20 08:27:43.404857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.408 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.414856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.414918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.414932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.414940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.414946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.408 [2024-11-20 08:27:43.414961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.408 qpair failed and we were unable to recover it.
00:30:29.408 [2024-11-20 08:27:43.424896] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.408 [2024-11-20 08:27:43.424953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.408 [2024-11-20 08:27:43.424967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.408 [2024-11-20 08:27:43.424974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.408 [2024-11-20 08:27:43.424980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.409 [2024-11-20 08:27:43.424995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.409 qpair failed and we were unable to recover it.
00:30:29.668 [2024-11-20 08:27:43.434916] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.668 [2024-11-20 08:27:43.434969] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.668 [2024-11-20 08:27:43.434982] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.668 [2024-11-20 08:27:43.434989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.668 [2024-11-20 08:27:43.434995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.668 [2024-11-20 08:27:43.435011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.668 qpair failed and we were unable to recover it.
00:30:29.668 [2024-11-20 08:27:43.444950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.668 [2024-11-20 08:27:43.445004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.668 [2024-11-20 08:27:43.445018] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.668 [2024-11-20 08:27:43.445025] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.668 [2024-11-20 08:27:43.445032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.668 [2024-11-20 08:27:43.445050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.668 qpair failed and we were unable to recover it.
00:30:29.668 [2024-11-20 08:27:43.454988] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.668 [2024-11-20 08:27:43.455041] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.668 [2024-11-20 08:27:43.455055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.668 [2024-11-20 08:27:43.455062] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.668 [2024-11-20 08:27:43.455068] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.668 [2024-11-20 08:27:43.455083] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.668 qpair failed and we were unable to recover it.
00:30:29.668 [2024-11-20 08:27:43.465014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.668 [2024-11-20 08:27:43.465068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.668 [2024-11-20 08:27:43.465084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.668 [2024-11-20 08:27:43.465091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.668 [2024-11-20 08:27:43.465097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.668 [2024-11-20 08:27:43.465113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.668 qpair failed and we were unable to recover it.
00:30:29.668 [2024-11-20 08:27:43.475056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.668 [2024-11-20 08:27:43.475109] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.668 [2024-11-20 08:27:43.475124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.668 [2024-11-20 08:27:43.475133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.668 [2024-11-20 08:27:43.475139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.668 [2024-11-20 08:27:43.475153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.668 qpair failed and we were unable to recover it.
00:30:29.668 [2024-11-20 08:27:43.485093] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.668 [2024-11-20 08:27:43.485172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.668 [2024-11-20 08:27:43.485186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.668 [2024-11-20 08:27:43.485193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.668 [2024-11-20 08:27:43.485203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.668 [2024-11-20 08:27:43.485219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.668 qpair failed and we were unable to recover it.
00:30:29.668 [2024-11-20 08:27:43.495120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.668 [2024-11-20 08:27:43.495176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.668 [2024-11-20 08:27:43.495190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.668 [2024-11-20 08:27:43.495197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.668 [2024-11-20 08:27:43.495206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.668 [2024-11-20 08:27:43.495221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.668 qpair failed and we were unable to recover it.
00:30:29.668 [2024-11-20 08:27:43.505166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.668 [2024-11-20 08:27:43.505220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.668 [2024-11-20 08:27:43.505235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.668 [2024-11-20 08:27:43.505242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.668 [2024-11-20 08:27:43.505249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.668 [2024-11-20 08:27:43.505263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.668 qpair failed and we were unable to recover it.
00:30:29.668 [2024-11-20 08:27:43.515140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.668 [2024-11-20 08:27:43.515221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.668 [2024-11-20 08:27:43.515236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.668 [2024-11-20 08:27:43.515243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.668 [2024-11-20 08:27:43.515249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.668 [2024-11-20 08:27:43.515264] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.668 qpair failed and we were unable to recover it.
00:30:29.668 [2024-11-20 08:27:43.525233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.668 [2024-11-20 08:27:43.525331] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.668 [2024-11-20 08:27:43.525347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.668 [2024-11-20 08:27:43.525354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.668 [2024-11-20 08:27:43.525361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.668 [2024-11-20 08:27:43.525375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.668 qpair failed and we were unable to recover it.
00:30:29.668 [2024-11-20 08:27:43.535212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.668 [2024-11-20 08:27:43.535268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.668 [2024-11-20 08:27:43.535283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.668 [2024-11-20 08:27:43.535294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.668 [2024-11-20 08:27:43.535301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.668 [2024-11-20 08:27:43.535316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.668 qpair failed and we were unable to recover it.
00:30:29.668 [2024-11-20 08:27:43.545246] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.668 [2024-11-20 08:27:43.545302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.668 [2024-11-20 08:27:43.545320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.668 [2024-11-20 08:27:43.545328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.669 [2024-11-20 08:27:43.545334] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.669 [2024-11-20 08:27:43.545352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.669 qpair failed and we were unable to recover it.
00:30:29.669 [2024-11-20 08:27:43.555267] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.669 [2024-11-20 08:27:43.555318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.669 [2024-11-20 08:27:43.555334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.669 [2024-11-20 08:27:43.555341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.669 [2024-11-20 08:27:43.555349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.669 [2024-11-20 08:27:43.555365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.669 qpair failed and we were unable to recover it.
00:30:29.669 [2024-11-20 08:27:43.565252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.669 [2024-11-20 08:27:43.565328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.669 [2024-11-20 08:27:43.565344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.669 [2024-11-20 08:27:43.565352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.669 [2024-11-20 08:27:43.565359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.669 [2024-11-20 08:27:43.565374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.669 qpair failed and we were unable to recover it.
00:30:29.669 [2024-11-20 08:27:43.575320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.669 [2024-11-20 08:27:43.575375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.669 [2024-11-20 08:27:43.575390] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.669 [2024-11-20 08:27:43.575398] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.669 [2024-11-20 08:27:43.575405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.669 [2024-11-20 08:27:43.575424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.669 qpair failed and we were unable to recover it.
00:30:29.669 [2024-11-20 08:27:43.585355] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.669 [2024-11-20 08:27:43.585419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.669 [2024-11-20 08:27:43.585435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.669 [2024-11-20 08:27:43.585442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.669 [2024-11-20 08:27:43.585450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.669 [2024-11-20 08:27:43.585465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.669 qpair failed and we were unable to recover it. 
00:30:29.669 [2024-11-20 08:27:43.595374] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.669 [2024-11-20 08:27:43.595446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.669 [2024-11-20 08:27:43.595462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.669 [2024-11-20 08:27:43.595470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.669 [2024-11-20 08:27:43.595476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.669 [2024-11-20 08:27:43.595491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.669 qpair failed and we were unable to recover it. 
00:30:29.669 [2024-11-20 08:27:43.605407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.669 [2024-11-20 08:27:43.605467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.669 [2024-11-20 08:27:43.605484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.669 [2024-11-20 08:27:43.605492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.669 [2024-11-20 08:27:43.605498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.669 [2024-11-20 08:27:43.605516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.669 qpair failed and we were unable to recover it. 
00:30:29.669 [2024-11-20 08:27:43.615463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.669 [2024-11-20 08:27:43.615521] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.669 [2024-11-20 08:27:43.615537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.669 [2024-11-20 08:27:43.615545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.669 [2024-11-20 08:27:43.615552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.669 [2024-11-20 08:27:43.615566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.669 qpair failed and we were unable to recover it. 
00:30:29.669 [2024-11-20 08:27:43.625494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.669 [2024-11-20 08:27:43.625572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.669 [2024-11-20 08:27:43.625587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.669 [2024-11-20 08:27:43.625594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.669 [2024-11-20 08:27:43.625601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.669 [2024-11-20 08:27:43.625617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.669 qpair failed and we were unable to recover it. 
00:30:29.669 [2024-11-20 08:27:43.635492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.669 [2024-11-20 08:27:43.635548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.669 [2024-11-20 08:27:43.635564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.669 [2024-11-20 08:27:43.635571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.669 [2024-11-20 08:27:43.635579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.669 [2024-11-20 08:27:43.635596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.669 qpair failed and we were unable to recover it. 
00:30:29.669 [2024-11-20 08:27:43.645532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.669 [2024-11-20 08:27:43.645587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.669 [2024-11-20 08:27:43.645602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.669 [2024-11-20 08:27:43.645610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.669 [2024-11-20 08:27:43.645616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.669 [2024-11-20 08:27:43.645633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.669 qpair failed and we were unable to recover it. 
00:30:29.669 [2024-11-20 08:27:43.655529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.669 [2024-11-20 08:27:43.655587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.669 [2024-11-20 08:27:43.655602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.669 [2024-11-20 08:27:43.655610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.669 [2024-11-20 08:27:43.655617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.669 [2024-11-20 08:27:43.655632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.669 qpair failed and we were unable to recover it. 
00:30:29.669 [2024-11-20 08:27:43.665613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.669 [2024-11-20 08:27:43.665665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.669 [2024-11-20 08:27:43.665680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.669 [2024-11-20 08:27:43.665690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.669 [2024-11-20 08:27:43.665697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.669 [2024-11-20 08:27:43.665713] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.669 qpair failed and we were unable to recover it. 
00:30:29.669 [2024-11-20 08:27:43.675616] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.669 [2024-11-20 08:27:43.675711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.669 [2024-11-20 08:27:43.675726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.669 [2024-11-20 08:27:43.675732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.669 [2024-11-20 08:27:43.675739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.670 [2024-11-20 08:27:43.675755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.670 qpair failed and we were unable to recover it. 
00:30:29.670 [2024-11-20 08:27:43.685658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.670 [2024-11-20 08:27:43.685732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.670 [2024-11-20 08:27:43.685747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.670 [2024-11-20 08:27:43.685755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.670 [2024-11-20 08:27:43.685762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.670 [2024-11-20 08:27:43.685778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.670 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.695643] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.929 [2024-11-20 08:27:43.695737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.929 [2024-11-20 08:27:43.695752] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.929 [2024-11-20 08:27:43.695760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.929 [2024-11-20 08:27:43.695767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.929 [2024-11-20 08:27:43.695784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.929 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.705702] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.929 [2024-11-20 08:27:43.705758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.929 [2024-11-20 08:27:43.705773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.929 [2024-11-20 08:27:43.705780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.929 [2024-11-20 08:27:43.705787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.929 [2024-11-20 08:27:43.705806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.929 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.715722] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.929 [2024-11-20 08:27:43.715774] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.929 [2024-11-20 08:27:43.715791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.929 [2024-11-20 08:27:43.715798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.929 [2024-11-20 08:27:43.715805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.929 [2024-11-20 08:27:43.715820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.929 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.725755] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.929 [2024-11-20 08:27:43.725826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.929 [2024-11-20 08:27:43.725843] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.929 [2024-11-20 08:27:43.725850] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.929 [2024-11-20 08:27:43.725858] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.929 [2024-11-20 08:27:43.725875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.929 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.735781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.929 [2024-11-20 08:27:43.735834] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.929 [2024-11-20 08:27:43.735850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.929 [2024-11-20 08:27:43.735859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.929 [2024-11-20 08:27:43.735866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.929 [2024-11-20 08:27:43.735882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.929 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.745833] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.929 [2024-11-20 08:27:43.745884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.929 [2024-11-20 08:27:43.745900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.929 [2024-11-20 08:27:43.745907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.929 [2024-11-20 08:27:43.745914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.929 [2024-11-20 08:27:43.745929] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.929 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.755829] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.929 [2024-11-20 08:27:43.755882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.929 [2024-11-20 08:27:43.755899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.929 [2024-11-20 08:27:43.755907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.929 [2024-11-20 08:27:43.755914] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.929 [2024-11-20 08:27:43.755930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.929 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.765798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.929 [2024-11-20 08:27:43.765856] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.929 [2024-11-20 08:27:43.765871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.929 [2024-11-20 08:27:43.765878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.929 [2024-11-20 08:27:43.765885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.929 [2024-11-20 08:27:43.765901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.929 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.775897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.929 [2024-11-20 08:27:43.775982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.929 [2024-11-20 08:27:43.775998] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.929 [2024-11-20 08:27:43.776005] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.929 [2024-11-20 08:27:43.776011] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.929 [2024-11-20 08:27:43.776027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.929 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.785919] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.929 [2024-11-20 08:27:43.785970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.929 [2024-11-20 08:27:43.785986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.929 [2024-11-20 08:27:43.785993] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.929 [2024-11-20 08:27:43.786000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.929 [2024-11-20 08:27:43.786015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.929 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.795955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.929 [2024-11-20 08:27:43.796008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.929 [2024-11-20 08:27:43.796027] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.929 [2024-11-20 08:27:43.796035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.929 [2024-11-20 08:27:43.796042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.929 [2024-11-20 08:27:43.796058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.929 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.805982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.929 [2024-11-20 08:27:43.806039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.929 [2024-11-20 08:27:43.806055] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.929 [2024-11-20 08:27:43.806063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.929 [2024-11-20 08:27:43.806070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.929 [2024-11-20 08:27:43.806085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.929 qpair failed and we were unable to recover it. 
00:30:29.929 [2024-11-20 08:27:43.816011] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.930 [2024-11-20 08:27:43.816067] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.930 [2024-11-20 08:27:43.816083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.930 [2024-11-20 08:27:43.816090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.930 [2024-11-20 08:27:43.816097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.930 [2024-11-20 08:27:43.816113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.930 qpair failed and we were unable to recover it. 
00:30:29.930 [2024-11-20 08:27:43.825963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.930 [2024-11-20 08:27:43.826017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.930 [2024-11-20 08:27:43.826032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.930 [2024-11-20 08:27:43.826039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.930 [2024-11-20 08:27:43.826047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.930 [2024-11-20 08:27:43.826063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.930 qpair failed and we were unable to recover it. 
00:30:29.930 [2024-11-20 08:27:43.836064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.930 [2024-11-20 08:27:43.836121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.930 [2024-11-20 08:27:43.836137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.930 [2024-11-20 08:27:43.836144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.930 [2024-11-20 08:27:43.836152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.930 [2024-11-20 08:27:43.836171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.930 qpair failed and we were unable to recover it. 
00:30:29.930 [2024-11-20 08:27:43.846100] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:29.930 [2024-11-20 08:27:43.846155] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:29.930 [2024-11-20 08:27:43.846170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:29.930 [2024-11-20 08:27:43.846178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:29.930 [2024-11-20 08:27:43.846185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:29.930 [2024-11-20 08:27:43.846207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:29.930 qpair failed and we were unable to recover it. 
00:30:29.930 [2024-11-20 08:27:43.856141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.930 [2024-11-20 08:27:43.856205] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.930 [2024-11-20 08:27:43.856222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.930 [2024-11-20 08:27:43.856229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.930 [2024-11-20 08:27:43.856236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.930 [2024-11-20 08:27:43.856252] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.930 qpair failed and we were unable to recover it.
00:30:29.930 [2024-11-20 08:27:43.866146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.930 [2024-11-20 08:27:43.866256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.930 [2024-11-20 08:27:43.866272] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.930 [2024-11-20 08:27:43.866281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.930 [2024-11-20 08:27:43.866287] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.930 [2024-11-20 08:27:43.866303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.930 qpair failed and we were unable to recover it.
00:30:29.930 [2024-11-20 08:27:43.876180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.930 [2024-11-20 08:27:43.876237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.930 [2024-11-20 08:27:43.876253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.930 [2024-11-20 08:27:43.876260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.930 [2024-11-20 08:27:43.876266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.930 [2024-11-20 08:27:43.876283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.930 qpair failed and we were unable to recover it.
00:30:29.930 [2024-11-20 08:27:43.886215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.930 [2024-11-20 08:27:43.886287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.930 [2024-11-20 08:27:43.886303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.930 [2024-11-20 08:27:43.886311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.930 [2024-11-20 08:27:43.886317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.930 [2024-11-20 08:27:43.886334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.930 qpair failed and we were unable to recover it.
00:30:29.930 [2024-11-20 08:27:43.896268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.930 [2024-11-20 08:27:43.896322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.930 [2024-11-20 08:27:43.896338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.930 [2024-11-20 08:27:43.896345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.930 [2024-11-20 08:27:43.896353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.930 [2024-11-20 08:27:43.896369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.930 qpair failed and we were unable to recover it.
00:30:29.930 [2024-11-20 08:27:43.906272] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.930 [2024-11-20 08:27:43.906329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.930 [2024-11-20 08:27:43.906344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.930 [2024-11-20 08:27:43.906351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.930 [2024-11-20 08:27:43.906359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.930 [2024-11-20 08:27:43.906375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.930 qpair failed and we were unable to recover it.
00:30:29.930 [2024-11-20 08:27:43.916261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.930 [2024-11-20 08:27:43.916318] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.930 [2024-11-20 08:27:43.916333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.930 [2024-11-20 08:27:43.916341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.930 [2024-11-20 08:27:43.916348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.930 [2024-11-20 08:27:43.916364] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.930 qpair failed and we were unable to recover it.
00:30:29.930 [2024-11-20 08:27:43.926319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.930 [2024-11-20 08:27:43.926377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.930 [2024-11-20 08:27:43.926395] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.930 [2024-11-20 08:27:43.926402] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.930 [2024-11-20 08:27:43.926410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.930 [2024-11-20 08:27:43.926425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.930 qpair failed and we were unable to recover it.
00:30:29.930 [2024-11-20 08:27:43.936344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.930 [2024-11-20 08:27:43.936413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.930 [2024-11-20 08:27:43.936428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.930 [2024-11-20 08:27:43.936435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.930 [2024-11-20 08:27:43.936442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.930 [2024-11-20 08:27:43.936458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.930 qpair failed and we were unable to recover it.
00:30:29.930 [2024-11-20 08:27:43.946314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:29.931 [2024-11-20 08:27:43.946372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:29.931 [2024-11-20 08:27:43.946388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:29.931 [2024-11-20 08:27:43.946397] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:29.931 [2024-11-20 08:27:43.946406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:29.931 [2024-11-20 08:27:43.946424] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:29.931 qpair failed and we were unable to recover it.
00:30:30.190 [2024-11-20 08:27:43.956441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.190 [2024-11-20 08:27:43.956494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.190 [2024-11-20 08:27:43.956510] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.190 [2024-11-20 08:27:43.956517] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.190 [2024-11-20 08:27:43.956524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.190 [2024-11-20 08:27:43.956540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.190 qpair failed and we were unable to recover it.
00:30:30.190 [2024-11-20 08:27:43.966503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.190 [2024-11-20 08:27:43.966574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.190 [2024-11-20 08:27:43.966589] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.190 [2024-11-20 08:27:43.966596] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.190 [2024-11-20 08:27:43.966603] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.190 [2024-11-20 08:27:43.966623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.190 qpair failed and we were unable to recover it.
00:30:30.190 [2024-11-20 08:27:43.976485] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.190 [2024-11-20 08:27:43.976542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.190 [2024-11-20 08:27:43.976557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.190 [2024-11-20 08:27:43.976565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.190 [2024-11-20 08:27:43.976571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.190 [2024-11-20 08:27:43.976586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.190 qpair failed and we were unable to recover it.
00:30:30.190 [2024-11-20 08:27:43.986547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:43.986603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:43.986619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:43.986626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:43.986632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:43.986648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:43.996523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:43.996579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:43.996595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:43.996602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:43.996609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:43.996623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:44.006547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:44.006607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:44.006622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:44.006629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:44.006637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:44.006653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:44.016574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:44.016630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:44.016646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:44.016654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:44.016660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:44.016677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:44.026642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:44.026698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:44.026713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:44.026721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:44.026728] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:44.026744] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:44.036558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:44.036612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:44.036628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:44.036635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:44.036643] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:44.036658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:44.046605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:44.046668] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:44.046683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:44.046691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:44.046699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:44.046715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:44.056650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:44.056703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:44.056724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:44.056731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:44.056738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:44.056753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:44.066739] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:44.066790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:44.066805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:44.066812] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:44.066820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:44.066835] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:44.076746] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:44.076802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:44.076817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:44.076826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:44.076833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:44.076849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:44.086785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:44.086861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:44.086877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:44.086884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:44.086891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:44.086906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:44.096721] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:44.096791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:44.096806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:44.096813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:44.096820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:44.096840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:44.106806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:44.106860] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.191 [2024-11-20 08:27:44.106876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.191 [2024-11-20 08:27:44.106883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.191 [2024-11-20 08:27:44.106890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.191 [2024-11-20 08:27:44.106906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.191 qpair failed and we were unable to recover it.
00:30:30.191 [2024-11-20 08:27:44.116790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.191 [2024-11-20 08:27:44.116846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.192 [2024-11-20 08:27:44.116861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.192 [2024-11-20 08:27:44.116868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.192 [2024-11-20 08:27:44.116875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.192 [2024-11-20 08:27:44.116890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.192 qpair failed and we were unable to recover it.
00:30:30.192 [2024-11-20 08:27:44.126893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.192 [2024-11-20 08:27:44.126951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.192 [2024-11-20 08:27:44.126967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.192 [2024-11-20 08:27:44.126974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.192 [2024-11-20 08:27:44.126980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.192 [2024-11-20 08:27:44.126996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.192 qpair failed and we were unable to recover it.
00:30:30.192 [2024-11-20 08:27:44.136917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.192 [2024-11-20 08:27:44.136973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.192 [2024-11-20 08:27:44.136989] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.192 [2024-11-20 08:27:44.136996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.192 [2024-11-20 08:27:44.137003] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.192 [2024-11-20 08:27:44.137019] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.192 qpair failed and we were unable to recover it.
00:30:30.192 [2024-11-20 08:27:44.146881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.192 [2024-11-20 08:27:44.146935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.192 [2024-11-20 08:27:44.146951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.192 [2024-11-20 08:27:44.146958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.192 [2024-11-20 08:27:44.146965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.192 [2024-11-20 08:27:44.146980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.192 qpair failed and we were unable to recover it.
00:30:30.192 [2024-11-20 08:27:44.156974] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.192 [2024-11-20 08:27:44.157034] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.192 [2024-11-20 08:27:44.157050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.192 [2024-11-20 08:27:44.157057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.192 [2024-11-20 08:27:44.157064] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.192 [2024-11-20 08:27:44.157079] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.192 qpair failed and we were unable to recover it.
00:30:30.192 [2024-11-20 08:27:44.166936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.192 [2024-11-20 08:27:44.166988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.192 [2024-11-20 08:27:44.167004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.192 [2024-11-20 08:27:44.167012] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.192 [2024-11-20 08:27:44.167019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.192 [2024-11-20 08:27:44.167034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.192 qpair failed and we were unable to recover it.
00:30:30.192 [2024-11-20 08:27:44.176964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.192 [2024-11-20 08:27:44.177019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.192 [2024-11-20 08:27:44.177034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.192 [2024-11-20 08:27:44.177041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.192 [2024-11-20 08:27:44.177048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.192 [2024-11-20 08:27:44.177064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.192 qpair failed and we were unable to recover it.
00:30:30.192 [2024-11-20 08:27:44.187020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.192 [2024-11-20 08:27:44.187105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.192 [2024-11-20 08:27:44.187124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.192 [2024-11-20 08:27:44.187132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.192 [2024-11-20 08:27:44.187138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.192 [2024-11-20 08:27:44.187153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.192 qpair failed and we were unable to recover it.
00:30:30.192 [2024-11-20 08:27:44.197109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:30:30.192 [2024-11-20 08:27:44.197171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:30:30.192 [2024-11-20 08:27:44.197187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:30:30.192 [2024-11-20 08:27:44.197194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:30:30.192 [2024-11-20 08:27:44.197204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0
00:30:30.192 [2024-11-20 08:27:44.197221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:30:30.192 qpair failed and we were unable to recover it.
00:30:30.192 [2024-11-20 08:27:44.207085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.192 [2024-11-20 08:27:44.207160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.192 [2024-11-20 08:27:44.207175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.192 [2024-11-20 08:27:44.207182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.192 [2024-11-20 08:27:44.207190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.192 [2024-11-20 08:27:44.207210] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.192 qpair failed and we were unable to recover it. 
00:30:30.451 [2024-11-20 08:27:44.217127] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.451 [2024-11-20 08:27:44.217179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.451 [2024-11-20 08:27:44.217195] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.451 [2024-11-20 08:27:44.217208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.451 [2024-11-20 08:27:44.217215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.451 [2024-11-20 08:27:44.217232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.451 qpair failed and we were unable to recover it. 
00:30:30.451 [2024-11-20 08:27:44.227214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.451 [2024-11-20 08:27:44.227268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.451 [2024-11-20 08:27:44.227283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.451 [2024-11-20 08:27:44.227291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.451 [2024-11-20 08:27:44.227297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.451 [2024-11-20 08:27:44.227316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.451 qpair failed and we were unable to recover it. 
00:30:30.451 [2024-11-20 08:27:44.237184] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.451 [2024-11-20 08:27:44.237237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.451 [2024-11-20 08:27:44.237252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.451 [2024-11-20 08:27:44.237259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.451 [2024-11-20 08:27:44.237265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.451 [2024-11-20 08:27:44.237280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.451 qpair failed and we were unable to recover it. 
00:30:30.451 [2024-11-20 08:27:44.247167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.451 [2024-11-20 08:27:44.247222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.452 [2024-11-20 08:27:44.247237] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.452 [2024-11-20 08:27:44.247245] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.452 [2024-11-20 08:27:44.247252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.452 [2024-11-20 08:27:44.247268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.452 qpair failed and we were unable to recover it. 
00:30:30.452 [2024-11-20 08:27:44.257243] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.452 [2024-11-20 08:27:44.257297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.452 [2024-11-20 08:27:44.257313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.452 [2024-11-20 08:27:44.257320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.452 [2024-11-20 08:27:44.257327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.452 [2024-11-20 08:27:44.257342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.452 qpair failed and we were unable to recover it. 
00:30:30.452 [2024-11-20 08:27:44.267284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.452 [2024-11-20 08:27:44.267342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.452 [2024-11-20 08:27:44.267357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.452 [2024-11-20 08:27:44.267365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.452 [2024-11-20 08:27:44.267372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.452 [2024-11-20 08:27:44.267387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.452 qpair failed and we were unable to recover it. 
00:30:30.452 [2024-11-20 08:27:44.277252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.452 [2024-11-20 08:27:44.277338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.452 [2024-11-20 08:27:44.277353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.452 [2024-11-20 08:27:44.277361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.452 [2024-11-20 08:27:44.277367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.452 [2024-11-20 08:27:44.277383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.452 qpair failed and we were unable to recover it. 
00:30:30.452 [2024-11-20 08:27:44.287326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.452 [2024-11-20 08:27:44.287403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.452 [2024-11-20 08:27:44.287419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.452 [2024-11-20 08:27:44.287426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.452 [2024-11-20 08:27:44.287433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.452 [2024-11-20 08:27:44.287449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.452 qpair failed and we were unable to recover it. 
00:30:30.452 [2024-11-20 08:27:44.297312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.452 [2024-11-20 08:27:44.297366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.452 [2024-11-20 08:27:44.297380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.452 [2024-11-20 08:27:44.297388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.452 [2024-11-20 08:27:44.297396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.452 [2024-11-20 08:27:44.297412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.452 qpair failed and we were unable to recover it. 
00:30:30.452 [2024-11-20 08:27:44.307413] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.452 [2024-11-20 08:27:44.307467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.452 [2024-11-20 08:27:44.307482] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.452 [2024-11-20 08:27:44.307490] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.452 [2024-11-20 08:27:44.307497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.452 [2024-11-20 08:27:44.307513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.452 qpair failed and we were unable to recover it. 
00:30:30.452 [2024-11-20 08:27:44.317429] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.452 [2024-11-20 08:27:44.317538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.452 [2024-11-20 08:27:44.317557] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.452 [2024-11-20 08:27:44.317565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.452 [2024-11-20 08:27:44.317572] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.452 [2024-11-20 08:27:44.317588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.452 qpair failed and we were unable to recover it. 
00:30:30.452 [2024-11-20 08:27:44.327449] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.452 [2024-11-20 08:27:44.327505] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.452 [2024-11-20 08:27:44.327520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.452 [2024-11-20 08:27:44.327527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.452 [2024-11-20 08:27:44.327534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.452 [2024-11-20 08:27:44.327550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.452 qpair failed and we were unable to recover it. 
00:30:30.452 [2024-11-20 08:27:44.337416] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.452 [2024-11-20 08:27:44.337473] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.452 [2024-11-20 08:27:44.337488] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.452 [2024-11-20 08:27:44.337496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.452 [2024-11-20 08:27:44.337503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x191fba0 00:30:30.452 [2024-11-20 08:27:44.337520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:30:30.452 qpair failed and we were unable to recover it. 
00:30:30.452 [2024-11-20 08:27:44.347507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.452 [2024-11-20 08:27:44.347618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.452 [2024-11-20 08:27:44.347676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.452 [2024-11-20 08:27:44.347705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.452 [2024-11-20 08:27:44.347726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc864000b90 00:30:30.452 [2024-11-20 08:27:44.347778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.452 qpair failed and we were unable to recover it. 
00:30:30.452 [2024-11-20 08:27:44.357554] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.452 [2024-11-20 08:27:44.357625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.452 [2024-11-20 08:27:44.357656] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.453 [2024-11-20 08:27:44.357671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.453 [2024-11-20 08:27:44.357694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc864000b90 00:30:30.453 [2024-11-20 08:27:44.357727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:30:30.453 qpair failed and we were unable to recover it. 
00:30:30.453 [2024-11-20 08:27:44.367612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.453 [2024-11-20 08:27:44.367717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.453 [2024-11-20 08:27:44.367775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.453 [2024-11-20 08:27:44.367801] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.453 [2024-11-20 08:27:44.367824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc868000b90 00:30:30.453 [2024-11-20 08:27:44.367876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.453 qpair failed and we were unable to recover it. 
00:30:30.453 [2024-11-20 08:27:44.377627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:30:30.453 [2024-11-20 08:27:44.377708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:30:30.453 [2024-11-20 08:27:44.377738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:30:30.453 [2024-11-20 08:27:44.377753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:30:30.453 [2024-11-20 08:27:44.377769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fc868000b90 00:30:30.453 [2024-11-20 08:27:44.377802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:30:30.453 qpair failed and we were unable to recover it. 00:30:30.453 [2024-11-20 08:27:44.377903] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:30:30.453 A controller has encountered a failure and is being reset. 00:30:30.453 Controller properly reset. 00:30:30.453 Initializing NVMe Controllers 00:30:30.453 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:30.453 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:30.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:30:30.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:30:30.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:30:30.453 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:30:30.453 Initialization complete. Launching workers. 
00:30:30.453 Starting thread on core 1 00:30:30.453 Starting thread on core 2 00:30:30.453 Starting thread on core 3 00:30:30.453 Starting thread on core 0 00:30:30.453 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:30:30.453 00:30:30.453 real 0m10.686s 00:30:30.453 user 0m19.620s 00:30:30.453 sys 0m4.666s 00:30:30.453 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:30.453 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:30:30.453 ************************************ 00:30:30.453 END TEST nvmf_target_disconnect_tc2 00:30:30.453 ************************************ 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@99 -- # sync 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # set +e 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:30.712 rmmod nvme_tcp 00:30:30.712 rmmod nvme_fabrics 00:30:30.712 rmmod nvme_keyring 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # set -e 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # return 0 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # '[' -n 1861279 ']' 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@337 -- # killprocess 1861279 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 1861279 ']' 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 1861279 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1861279 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1861279' 00:30:30.712 killing process with pid 1861279 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 1861279 00:30:30.712 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 1861279 00:30:30.971 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:30.971 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:30:30.971 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/setup.sh@254 -- # local dev 00:30:30.971 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@257 -- # remove_target_ns 00:30:30.971 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:30.971 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:30.971 08:27:44 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@258 -- # delete_main_bridge 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@121 -- # return 0 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@261 -- # [[ -e 
/sys/class/net/cvl_0_1/address ]] 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@274 -- # iptr 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@548 -- # iptables-save 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@548 -- # iptables-restore 00:30:32.878 00:30:32.878 real 0m19.612s 00:30:32.878 user 0m47.062s 00:30:32.878 sys 0m9.639s 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:32.878 08:27:46 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:30:32.878 ************************************ 00:30:32.878 END TEST nvmf_target_disconnect 00:30:32.878 ************************************ 00:30:33.138 08:27:46 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 
00:30:33.138 00:30:33.138 real 5m56.715s 00:30:33.138 user 10m38.383s 00:30:33.138 sys 1m59.942s 00:30:33.138 08:27:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:33.138 08:27:46 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:33.138 ************************************ 00:30:33.138 END TEST nvmf_host 00:30:33.138 ************************************ 00:30:33.138 08:27:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:30:33.138 08:27:46 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:30:33.138 08:27:46 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:33.138 08:27:46 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:33.138 08:27:46 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:33.138 08:27:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:33.138 ************************************ 00:30:33.138 START TEST nvmf_target_core_interrupt_mode 00:30:33.138 ************************************ 00:30:33.138 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:30:33.138 * Looking for test storage... 
00:30:33.138 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:30:33.138 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:33.138 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:30:33.138 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:33.398 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:33.398 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:33.398 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:33.398 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:33.398 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:30:33.398 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:30:33.398 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:30:33.398 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:30:33.398 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:30:33.399 08:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:33.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.399 --rc 
genhtml_branch_coverage=1 00:30:33.399 --rc genhtml_function_coverage=1 00:30:33.399 --rc genhtml_legend=1 00:30:33.399 --rc geninfo_all_blocks=1 00:30:33.399 --rc geninfo_unexecuted_blocks=1 00:30:33.399 00:30:33.399 ' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:33.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.399 --rc genhtml_branch_coverage=1 00:30:33.399 --rc genhtml_function_coverage=1 00:30:33.399 --rc genhtml_legend=1 00:30:33.399 --rc geninfo_all_blocks=1 00:30:33.399 --rc geninfo_unexecuted_blocks=1 00:30:33.399 00:30:33.399 ' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:33.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.399 --rc genhtml_branch_coverage=1 00:30:33.399 --rc genhtml_function_coverage=1 00:30:33.399 --rc genhtml_legend=1 00:30:33.399 --rc geninfo_all_blocks=1 00:30:33.399 --rc geninfo_unexecuted_blocks=1 00:30:33.399 00:30:33.399 ' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:33.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.399 --rc genhtml_branch_coverage=1 00:30:33.399 --rc genhtml_function_coverage=1 00:30:33.399 --rc genhtml_legend=1 00:30:33.399 --rc geninfo_all_blocks=1 00:30:33.399 --rc geninfo_unexecuted_blocks=1 00:30:33.399 00:30:33.399 ' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@50 -- # : 0 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:33.399 08:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:33.399 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:33.399 ************************************ 00:30:33.399 START TEST nvmf_abort 00:30:33.399 ************************************ 00:30:33.399 08:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:30:33.399 * Looking for test storage... 00:30:33.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@341 -- # ver2_l=1 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:33.400 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:33.660 08:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:33.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.660 --rc genhtml_branch_coverage=1 00:30:33.660 --rc genhtml_function_coverage=1 00:30:33.660 --rc genhtml_legend=1 00:30:33.660 --rc geninfo_all_blocks=1 00:30:33.660 --rc geninfo_unexecuted_blocks=1 00:30:33.660 00:30:33.660 ' 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:33.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.660 --rc genhtml_branch_coverage=1 00:30:33.660 --rc genhtml_function_coverage=1 00:30:33.660 --rc genhtml_legend=1 00:30:33.660 --rc geninfo_all_blocks=1 00:30:33.660 --rc geninfo_unexecuted_blocks=1 00:30:33.660 00:30:33.660 ' 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:33.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.660 --rc genhtml_branch_coverage=1 00:30:33.660 --rc genhtml_function_coverage=1 00:30:33.660 --rc genhtml_legend=1 00:30:33.660 --rc geninfo_all_blocks=1 00:30:33.660 --rc geninfo_unexecuted_blocks=1 00:30:33.660 00:30:33.660 ' 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:33.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:33.660 --rc genhtml_branch_coverage=1 00:30:33.660 --rc 
genhtml_function_coverage=1 00:30:33.660 --rc genhtml_legend=1 00:30:33.660 --rc geninfo_all_blocks=1 00:30:33.660 --rc geninfo_unexecuted_blocks=1 00:30:33.660 00:30:33.660 ' 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:33.660 08:27:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:33.660 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:33.661 
08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:30:33.661 08:27:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.237 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:40.237 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:30:40.237 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # 
mlx=() 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:40.238 
08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:40.238 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:40.238 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:40.238 08:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:40.238 Found net devices under 0000:86:00.0: cvl_0_0 00:30:40.238 08:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:40.238 Found net devices under 0000:86:00.1: cvl_0_1 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:40.238 08:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@247 -- # create_target_ns 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:30:40.238 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 
00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@200 -- # echo 10.0.0.1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:40.239 10.0.0.1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias 00:30:40.239 10.0.0.2 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j 
ACCEPT 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:40.239 08:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:40.239 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:40.239 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.425 ms 00:30:40.239 00:30:40.239 --- 10.0.0.1 ping statistics --- 00:30:40.239 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.239 rtt min/avg/max/mdev = 0.425/0.425/0.425/0.000 ms 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:40.239 
08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:30:40.239 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:30:40.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:40.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:30:40.240 00:30:40.240 --- 10.0.0.2 ping statistics --- 00:30:40.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:40.240 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair++ )) 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 
00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:40.240 08:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=initiator1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target0 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:40.240 
08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # get_net_dev target1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # local dev=target1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # return 1 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@159 -- # dev= 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@160 -- # return 0 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:30:40.240 08:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:30:40.240 ' 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=1866052 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 1866052 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 1866052 ']' 00:30:40.240 08:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:40.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:40.240 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.240 [2024-11-20 08:27:53.535353] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:40.240 [2024-11-20 08:27:53.536304] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:30:40.240 [2024-11-20 08:27:53.536343] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:40.240 [2024-11-20 08:27:53.613688] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:40.240 [2024-11-20 08:27:53.655844] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:40.240 [2024-11-20 08:27:53.655880] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:40.241 [2024-11-20 08:27:53.655888] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:40.241 [2024-11-20 08:27:53.655894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:30:40.241 [2024-11-20 08:27:53.655899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:40.241 [2024-11-20 08:27:53.657362] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:40.241 [2024-11-20 08:27:53.657470] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:40.241 [2024-11-20 08:27:53.657471] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:40.241 [2024-11-20 08:27:53.723971] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:40.241 [2024-11-20 08:27:53.724740] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:40.241 [2024-11-20 08:27:53.725002] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:40.241 [2024-11-20 08:27:53.725149] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.241 [2024-11-20 08:27:53.786279] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.241 Malloc0 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.241 08:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.241 Delay0 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.241 08:27:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.241 [2024-11-20 08:27:53.882294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:40.241 08:27:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:30:40.241 [2024-11-20 08:27:54.015357] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:30:42.158 Initializing NVMe Controllers 00:30:42.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:30:42.158 controller IO queue size 128 less than required 00:30:42.158 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:30:42.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:30:42.158 Initialization complete. Launching workers. 
00:30:42.158 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38205 00:30:42.158 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38262, failed to submit 66 00:30:42.158 success 38205, unsuccessful 57, failed 0 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:42.158 rmmod nvme_tcp 00:30:42.158 rmmod nvme_fabrics 00:30:42.158 rmmod nvme_keyring 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:42.158 08:27:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 1866052 ']' 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 1866052 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 1866052 ']' 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 1866052 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:42.158 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1866052 00:30:42.417 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:42.417 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:42.417 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1866052' 00:30:42.417 killing process with pid 1866052 00:30:42.417 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 1866052 00:30:42.417 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 1866052 00:30:42.417 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:42.417 08:27:56 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:30:42.417 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@254 -- # local dev 00:30:42.417 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@257 -- # remove_target_ns 00:30:42.417 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:42.417 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:42.417 08:27:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:44.954 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@258 -- # delete_main_bridge 00:30:44.954 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@121 -- # return 0 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:30:44.955 08:27:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@274 -- # iptr 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # iptables-save 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@548 -- # iptables-restore 00:30:44.955 00:30:44.955 real 0m11.218s 00:30:44.955 user 0m10.443s 00:30:44.955 sys 0m5.674s 
00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:30:44.955 ************************************ 00:30:44.955 END TEST nvmf_abort 00:30:44.955 ************************************ 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:44.955 ************************************ 00:30:44.955 START TEST nvmf_ns_hotplug_stress 00:30:44.955 ************************************ 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:30:44.955 * Looking for test storage... 
00:30:44.955 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:30:44.955 08:27:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:30:44.955 08:27:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:44.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.955 --rc genhtml_branch_coverage=1 00:30:44.955 --rc genhtml_function_coverage=1 00:30:44.955 --rc genhtml_legend=1 00:30:44.955 --rc geninfo_all_blocks=1 00:30:44.955 --rc geninfo_unexecuted_blocks=1 00:30:44.955 00:30:44.955 ' 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:44.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.955 --rc genhtml_branch_coverage=1 00:30:44.955 --rc genhtml_function_coverage=1 00:30:44.955 --rc genhtml_legend=1 00:30:44.955 --rc geninfo_all_blocks=1 00:30:44.955 --rc geninfo_unexecuted_blocks=1 00:30:44.955 00:30:44.955 ' 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:44.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.955 --rc genhtml_branch_coverage=1 00:30:44.955 --rc genhtml_function_coverage=1 00:30:44.955 --rc genhtml_legend=1 00:30:44.955 --rc geninfo_all_blocks=1 00:30:44.955 --rc geninfo_unexecuted_blocks=1 00:30:44.955 00:30:44.955 ' 00:30:44.955 08:27:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:44.955 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:44.955 --rc genhtml_branch_coverage=1 00:30:44.955 --rc genhtml_function_coverage=1 00:30:44.955 --rc genhtml_legend=1 00:30:44.955 --rc geninfo_all_blocks=1 00:30:44.955 --rc geninfo_unexecuted_blocks=1 00:30:44.955 00:30:44.955 ' 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:44.955 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.956 
08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:44.956 08:27:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:30:44.956 08:27:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:30:44.956 08:27:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@135 -- # net_devs=() 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:51.532 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:51.532 08:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:51.532 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:51.532 08:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:51.532 Found net devices under 0000:86:00.0: cvl_0_0 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:51.532 Found net devices under 0000:86:00.1: cvl_0_1 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:51.532 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@247 -- # create_target_ns 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@133 -- # 
local ns=nvmf_ns_spdk 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:30:51.533 08:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/cvl_0_0/ifalias' 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:51.533 10.0.0.1 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias' 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:51.533 10.0.0.2 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:51.533 08:28:04 
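The `set_ip` calls traced above turn 32-bit integers into dotted-quad addresses (167772161 becomes 10.0.0.1) before assigning them to the interfaces. A minimal standalone sketch of that conversion, assuming `val_to_ip` does the byte-splitting that the `printf '%u.%u.%u.%u\n'` trace suggests (the function body here is a reconstruction, not the script's actual source):

```shell
# Hypothetical reconstruction of nvmf/setup.sh's val_to_ip:
# split a 32-bit integer into four bytes, most significant first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) \
    $((  val        & 255 ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Keeping the pool as a plain integer lets the script hand out consecutive addresses with ordinary shell arithmetic instead of parsing dotted strings.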
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 
00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:51.533 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 
NVMF_TARGET_NS_CMD 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:51.534 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:51.534 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.481 ms 00:30:51.534 00:30:51.534 --- 10.0.0.1 ping statistics --- 00:30:51.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.534 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 
00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:30:51.534 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:51.534 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:30:51.534 00:30:51.534 --- 10.0.0.2 ping statistics --- 00:30:51.534 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:51.534 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair++ )) 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
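The loop in `setup_interfaces` advances `ip_pool` by two per iteration (`(( _dev++, ip_pool += 2 ))`), so each initiator/target pair consumes two consecutive addresses starting at 0x0a000001. A small sketch of that allocation arithmetic, assuming the pool layout shown in the trace (`pair_ips` is a hypothetical helper name, not part of the script):

```shell
# Pool base seen in the trace: 0x0a000001 == 10.0.0.1.
ip_pool=$(( 0x0a000001 ))

# Print the initiator and target address for pair number $1.
pair_ips() {
  local id=$1
  local ip=$(( ip_pool + id * 2 ))
  local out=""
  local i
  for i in "$ip" $(( ip + 1 )); do
    out="$out$(printf '%u.%u.%u.%u' \
      $(( (i >> 24) & 255 )) $(( (i >> 16) & 255 )) \
      $(( (i >> 8)  & 255 )) $((  i        & 255 ))) "
  done
  printf '%s\n' "${out% }"
}

pair_ips 0   # 10.0.0.1 10.0.0.2
pair_ips 1   # 10.0.0.3 10.0.0.4
```

This matches the guard `(( (_dev + no) * 2 <= 255 ))` in the trace: the whole pool must fit inside the final octet of the 10.0.0.0/24 range.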
00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- 
# local dev=initiator1 in_ns= ip 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=initiator1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:51.534 08:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target0 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:30:51.534 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # get_net_dev target1 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # local dev=target1 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # return 1 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@159 -- # dev= 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@160 -- # return 0 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:30:51.535 08:28:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:30:51.535 ' 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=1870065 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # 
waitforlisten 1870065 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 1870065 ']' 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:51.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:51.535 08:28:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:51.535 [2024-11-20 08:28:04.858974] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:51.535 [2024-11-20 08:28:04.859940] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:30:51.535 [2024-11-20 08:28:04.859980] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:51.535 [2024-11-20 08:28:04.938258] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:51.535 [2024-11-20 08:28:04.978112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:51.535 [2024-11-20 08:28:04.978150] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:51.535 [2024-11-20 08:28:04.978158] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:51.535 [2024-11-20 08:28:04.978164] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:51.535 [2024-11-20 08:28:04.978169] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:51.535 [2024-11-20 08:28:04.979622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:51.535 [2024-11-20 08:28:04.979731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:51.535 [2024-11-20 08:28:04.979730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:51.535 [2024-11-20 08:28:05.046123] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:51.535 [2024-11-20 08:28:05.046955] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:51.535 [2024-11-20 08:28:05.047344] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:51.535 [2024-11-20 08:28:05.047445] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:30:51.535 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:51.535 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:30:51.535 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:51.535 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:51.535 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:30:51.535 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:51.535 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:30:51.535 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:51.535 [2024-11-20 08:28:05.292570] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:51.535 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:30:51.535 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:51.794 [2024-11-20 08:28:05.681106] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:30:51.794 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:52.053 08:28:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:30:52.312 Malloc0 00:30:52.312 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:30:52.312 Delay0 00:30:52.312 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:52.570 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:30:52.829 NULL1 00:30:52.829 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:30:53.087 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=1870331 00:30:53.087 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 
512 -Q 1000 00:30:53.087 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:30:53.087 08:28:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:54.023 Read completed with error (sct=0, sc=11) 00:30:54.023 08:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:54.023 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.281 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:54.281 08:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:30:54.281 08:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:30:54.540 true 00:30:54.540 08:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:30:54.540 08:28:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.475 08:28:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:55.475 08:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:30:55.475 08:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:30:55.734 true 00:30:55.734 08:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:30:55.734 08:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:55.993 08:28:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:56.251 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:30:56.251 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:30:56.251 true 00:30:56.251 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:30:56.251 08:28:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:30:57.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.627 08:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:57.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.627 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:57.627 08:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:30:57.627 08:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:30:57.886 true 00:30:57.886 08:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:30:57.886 08:28:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:58.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.823 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:58.823 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:30:58.823 08:28:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:30:58.823 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:30:59.082 true 00:30:59.082 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:30:59.082 08:28:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:30:59.082 08:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:30:59.340 08:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:30:59.340 08:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:30:59.598 true 00:30:59.598 08:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:30:59.598 08:28:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:00.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.975 08:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:00.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.975 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:00.975 08:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:31:00.975 08:28:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:31:01.236 true 00:31:01.236 08:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:01.236 08:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:01.867 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:01.867 08:28:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.126 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:02.126 08:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:31:02.126 08:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:31:02.384 true 00:31:02.384 08:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:02.384 08:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:02.643 08:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:02.901 08:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:31:02.901 08:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:31:02.901 true 00:31:02.901 08:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:02.901 08:28:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:04.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.277 08:28:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:04.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.277 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:31:04.277 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.278 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:04.278 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:31:04.278 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:31:04.536 true 00:31:04.536 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:04.536 08:28:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:05.472 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:05.472 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:31:05.472 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:31:05.731 true 00:31:05.731 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:05.731 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:31:05.989 08:28:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:06.247 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:31:06.247 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:31:06.247 true 00:31:06.247 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:06.247 08:28:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 08:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:07.623 08:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1013 00:31:07.623 08:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:31:07.883 true 00:31:07.883 08:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:07.883 08:28:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:08.820 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:08.820 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:08.820 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:31:08.820 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:31:09.079 true 00:31:09.079 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:09.079 08:28:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:09.338 08:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:09.597 08:28:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:31:09.597 08:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:31:09.597 true 00:31:09.597 08:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:09.597 08:28:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:10.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.974 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:10.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.974 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:10.974 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:31:10.975 08:28:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:31:11.233 true 00:31:11.233 08:28:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:11.233 08:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.176 08:28:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.176 08:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:31:12.176 08:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:31:12.435 true 00:31:12.435 08:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:12.435 08:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:12.693 08:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:12.952 08:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:31:12.952 08:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:31:12.952 true 
00:31:12.952 08:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:12.952 08:28:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:14.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.330 08:28:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:14.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.330 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:14.331 08:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:31:14.331 08:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:31:14.589 true 00:31:14.589 08:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:14.589 08:28:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.526 
08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:15.526 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:31:15.526 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:31:15.785 true 00:31:15.785 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:15.785 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:15.785 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:16.044 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:31:16.044 08:28:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:31:16.302 true 00:31:16.303 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:16.303 08:28:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:31:17.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.239 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:17.239 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.497 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:17.497 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:31:17.497 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:31:17.757 true 00:31:17.757 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:17.757 08:28:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:18.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:18.695 08:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:18.695 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:18.695 08:28:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:31:18.695 08:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:31:18.954 true 00:31:18.954 08:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:18.954 08:28:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:19.213 08:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:19.472 08:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:31:19.472 08:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:31:19.472 true 00:31:19.472 08:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:19.472 08:28:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:20.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.850 08:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:20.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.850 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:20.850 08:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:31:20.850 08:28:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:31:21.109 true 00:31:21.109 08:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:21.109 08:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:22.047 08:28:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.047 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:31:22.047 08:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:31:22.047 08:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:31:22.305 true 00:31:22.305 08:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:22.305 08:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:22.564 08:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:22.823 08:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:31:22.823 08:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:31:22.823 true 00:31:23.082 08:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:23.082 08:28:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.018 Initializing NVMe Controllers 00:31:24.018 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:24.018 Controller IO queue size 128, less than required. 00:31:24.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:24.018 Controller IO queue size 128, less than required. 
00:31:24.018 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:24.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:24.018 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:31:24.018 Initialization complete. Launching workers. 00:31:24.018 ======================================================== 00:31:24.018 Latency(us) 00:31:24.018 Device Information : IOPS MiB/s Average min max 00:31:24.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2284.11 1.12 40985.99 2719.61 1012821.08 00:31:24.018 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18612.77 9.09 6876.95 1570.11 370176.42 00:31:24.018 ======================================================== 00:31:24.018 Total : 20896.88 10.20 10605.19 1570.11 1012821.08 00:31:24.018 00:31:24.018 08:28:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:24.278 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:31:24.278 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:31:24.537 true 00:31:24.537 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 1870331 00:31:24.537 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (1870331) - No such process 00:31:24.537 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 1870331 00:31:24.537 
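The trace above (the repeated `@44`–`@50` steps, ending when `kill -0 1870331` reports "No such process") suggests a loop that keeps hot-swapping namespace 1 and growing the `NULL1` bdev while the I/O workload process is alive. A minimal self-contained sketch of that loop follows; `rpc` is a stand-in stub for `spdk/scripts/rpc.py`, and the fixed three iterations replace the real liveness check, so this is an illustration of the control flow only, not the actual test script.

```shell
# Stub standing in for /var/jenkins/.../spdk/scripts/rpc.py (assumption,
# for illustration only): just echo the RPC that would be issued.
rpc() { echo "rpc $*"; }

null_size=1013
# Real script loops while the workload PID is alive (kill -0 "$pid");
# here we run a fixed three iterations so the sketch is runnable.
for _ in 1 2 3; do
  rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # @45
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 # @46
  null_size=$((null_size + 1))                                # @49
  rpc bdev_null_resize NULL1 "$null_size"                     # @50
done
echo "final null_size=$null_size"
```

The point of the incrementing resize is to exercise namespace-size change notifications concurrently with the add/remove churn, which matches the monotonically growing `null_size=1013…1028` values in the log.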
08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:24.800 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:24.800 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:31:24.800 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:31:24.800 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:31:24.800 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:24.800 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:31:25.059 null0 00:31:25.059 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.059 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.059 08:28:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:31:25.318 null1 00:31:25.318 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.318 08:28:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.318 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:31:25.318 null2 00:31:25.318 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.318 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.318 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:31:25.578 null3 00:31:25.578 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.578 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.578 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:31:25.837 null4 00:31:25.837 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:25.837 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:25.837 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:31:26.096 null5 00:31:26.096 08:28:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:26.096 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:26.096 08:28:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:31:26.096 null6 00:31:26.096 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:26.096 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:26.096 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:31:26.356 null7 00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.356 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 1875780 1875782 1875785 1875788 1875792 1875794 1875796 1875799 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.357 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:26.617 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:26.617 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.617 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:26.617 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:26.617 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:26.617 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:26.617 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:26.617 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.876 08:28:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:26.876 08:28:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:26.876 08:28:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.136 08:28:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.136 08:28:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.136 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:27.396 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:27.396 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.396 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:27.396 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:27.396 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:27.396 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:27.396 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:27.396 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:27.655 08:28:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.655 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.914 08:28:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.914 08:28:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:27.914 08:28:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:28.174 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:28.174 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:28.174 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:28.174 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:28.174 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:28.174 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.174 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:28.174 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.433 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:28.692 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:28.692 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:28.692 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:28.692 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:31:28.692 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:28.692 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:28.692 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:28.692 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:28.951 08:28:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:28.951 08:28:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.210 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.210 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.210 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:29.210 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.210 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
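The interleaved `(( ++i ))` / `nvmf_subsystem_add_ns` / `nvmf_subsystem_remove_ns` records above come from the stress loop in `target/ns_hotplug_stress.sh` (lines 16-18 of the script, per the trace markers). A simplified, sequential sketch of that cycle follows; the `rpc` function is a hypothetical stub standing in for `scripts/rpc.py`, since the real calls need a running SPDK target, and the real script issues the RPCs concurrently, which is why the trace records interleave:

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress add/remove cycle seen in the trace.
# "rpc" is a stand-in stub for scripts/rpc.py so the sketch runs anywhere.
rpc() { echo "rpc.py $*"; }

NQN="nqn.2016-06.io.spdk:cnode1"
i=0
while ((i < 10)); do           # ns_hotplug_stress.sh line 16 in the trace
  ((++i))
  for n in {1..8}; do          # line 17: attach null bdevs as namespaces 1-8
    rpc nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))" > /dev/null
  done
  for n in {1..8}; do          # line 18: detach them again
    rpc nvmf_subsystem_remove_ns "$NQN" "$n" > /dev/null
  done
done
echo "completed $i add/remove cycles"
```

Ten cycles of eight attach/detach pairs matches the namespace IDs 1-8 cycling through the trace above.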
00:31:29.210 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:29.210 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.210 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.210 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:29.210 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.210 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.211 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.211 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:29.211 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.211 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:29.211 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.211 08:28:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.211 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:29.211 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.211 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.211 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:29.211 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.211 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.211 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:29.469 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:29.469 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:29.469 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.470 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:29.470 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:29.470 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:29.470 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:29.470 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.729 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.989 08:28:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:31:29.989 08:28:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:31:29.989 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.990 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.990 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.990 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.990 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:31:29.990 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:31:29.990 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:29.990 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:29.990 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:31:30.249 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:31:30.249 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:31:30.249 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:31:30.249 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:31:30.249 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:31:30.249 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:31:30.249 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:30.249 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.507 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:31:30.508 08:28:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:30.508 rmmod nvme_tcp 00:31:30.508 rmmod nvme_fabrics 00:31:30.508 rmmod nvme_keyring 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 1870065 ']' 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 1870065 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 1870065 ']' 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@958 -- # kill -0 1870065 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:30.508 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1870065 00:31:30.767 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:30.767 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:30.767 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1870065' 00:31:30.767 killing process with pid 1870065 00:31:30.767 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 1870065 00:31:30.767 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 1870065 00:31:30.767 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:30.767 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:31:30.767 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@254 -- # local dev 00:31:30.767 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # remove_target_ns 00:31:30.767 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:30.767 08:28:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:30.767 08:28:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # delete_main_bridge 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@121 -- # return 0 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@274 -- # iptr 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-save 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@548 -- # iptables-restore 00:31:33.305 00:31:33.305 real 0m48.246s 00:31:33.305 user 2m59.768s 00:31:33.305 sys 0m20.232s 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:33.305 
08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:31:33.305 ************************************ 00:31:33.305 END TEST nvmf_ns_hotplug_stress 00:31:33.305 ************************************ 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:33.305 ************************************ 00:31:33.305 START TEST nvmf_delete_subsystem 00:31:33.305 ************************************ 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:31:33.305 * Looking for test storage... 
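The `nvmftestfini` teardown traced above ends with the `iptr` helper, which clears SPDK-tagged firewall rules via `iptables-save | grep -v SPDK_NVMF | iptables-restore`. The filtering step can be illustrated on sample rule text without root; the rule strings below are invented for illustration and are not taken from the test run:

```shell
# Filter SPDK-tagged rules out of a saved iptables dump, as iptr does.
# The saved rules here are illustrative only.
saved='-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT
-A INPUT -j DROP'

# In the real helper the filtered output is piped into iptables-restore.
kept=$(grep -v SPDK_NVMF <<< "$saved")
echo "$kept"
```

Dropping only the tagged line leaves unrelated rules (SSH, the default DROP) intact, which is why the teardown uses a tag filter rather than a full `iptables -F` flush.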
00:31:33.305 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:31:33.305 08:28:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:31:33.305 08:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:31:33.305 08:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:33.305 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.305 --rc genhtml_branch_coverage=1 00:31:33.305 --rc genhtml_function_coverage=1 00:31:33.305 --rc genhtml_legend=1 00:31:33.305 --rc geninfo_all_blocks=1 00:31:33.305 --rc geninfo_unexecuted_blocks=1 00:31:33.305 00:31:33.305 ' 00:31:33.305 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:33.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.306 --rc genhtml_branch_coverage=1 00:31:33.306 --rc genhtml_function_coverage=1 00:31:33.306 --rc genhtml_legend=1 00:31:33.306 --rc geninfo_all_blocks=1 00:31:33.306 --rc geninfo_unexecuted_blocks=1 00:31:33.306 00:31:33.306 ' 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:33.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.306 --rc genhtml_branch_coverage=1 00:31:33.306 --rc genhtml_function_coverage=1 00:31:33.306 --rc genhtml_legend=1 00:31:33.306 --rc geninfo_all_blocks=1 00:31:33.306 --rc geninfo_unexecuted_blocks=1 00:31:33.306 00:31:33.306 ' 00:31:33.306 08:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:33.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:33.306 --rc genhtml_branch_coverage=1 00:31:33.306 --rc genhtml_function_coverage=1 00:31:33.306 --rc genhtml_legend=1 00:31:33.306 --rc geninfo_all_blocks=1 00:31:33.306 --rc geninfo_unexecuted_blocks=1 00:31:33.306 00:31:33.306 ' 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.306 
08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:33.306 08:28:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:31:33.306 08:28:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 
-- # e810=() 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:39.873 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:39.873 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:39.874 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:39.874 Found net devices under 0000:86:00.0: cvl_0_0 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:39.874 08:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:39.874 Found net devices under 0000:86:00.1: cvl_0_1 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@247 -- # create_target_ns 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:31:39.874 08:28:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- 
# (( _dev < max + no )) 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@143 
-- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:39.874 10.0.0.1 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # 
set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:39.874 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:39.875 10.0.0.2 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@66 -- # 
set_up cvl_0_0 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 
-j ACCEPT 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:39.875 08:28:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:39.875 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:39.875 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.469 ms 00:31:39.875 00:31:39.875 --- 10.0.0.1 ping statistics --- 00:31:39.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.875 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:39.875 08:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:31:39.875 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:39.875 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:31:39.875 00:31:39.875 --- 10.0.0.2 ping statistics --- 00:31:39.875 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:39.875 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:39.875 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:39.876 08:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target0 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # local dev=target1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # return 1 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@159 -- # dev= 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@160 -- # return 0 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:31:39.876 ' 00:31:39.876 08:28:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=1880098 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 1880098 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 1880098 ']' 
00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:39.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.876 [2024-11-20 08:28:53.191903] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:39.876 [2024-11-20 08:28:53.192856] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:31:39.876 [2024-11-20 08:28:53.192894] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:39.876 [2024-11-20 08:28:53.253367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:39.876 [2024-11-20 08:28:53.295465] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:39.876 [2024-11-20 08:28:53.295501] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:39.876 [2024-11-20 08:28:53.295508] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:39.876 [2024-11-20 08:28:53.295514] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:39.876 [2024-11-20 08:28:53.295519] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:39.876 [2024-11-20 08:28:53.296675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:39.876 [2024-11-20 08:28:53.296676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:39.876 [2024-11-20 08:28:53.364055] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:39.876 [2024-11-20 08:28:53.364629] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:39.876 [2024-11-20 08:28:53.364832] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.876 [2024-11-20 08:28:53.429538] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:39.876 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.877 [2024-11-20 08:28:53.457826] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.877 NULL1 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:31:39.877 Delay0 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=1880306 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:31:39.877 08:28:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:31:39.877 [2024-11-20 08:28:53.568669] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:31:41.781 08:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:41.781 08:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:41.781 08:28:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 starting I/O failed: -6 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 starting I/O failed: -6 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 starting I/O failed: -6 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 starting I/O failed: -6 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 starting I/O failed: -6 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 starting I/O failed: -6 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, 
sc=8) 00:31:41.781 starting I/O failed: -6 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 starting I/O failed: -6 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 starting I/O failed: -6 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 starting I/O failed: -6 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 starting I/O failed: -6 00:31:41.781 [2024-11-20 08:28:55.766576] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af8680 is same with the state(6) to be set 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error 
(sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Write completed with error (sct=0, sc=8) 00:31:41.781 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 
00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, 
sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write 
completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 Write completed with error (sct=0, sc=8) 00:31:41.782 Read completed with error (sct=0, sc=8) 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 
starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:41.782 starting I/O failed: -6 00:31:43.160 [2024-11-20 08:28:56.746129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af99a0 is same with the state(6) to be set 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 [2024-11-20 08:28:56.769871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af84a0 is same with the state(6) to be set 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read 
completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 [2024-11-20 08:28:56.770218] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1af8860 is same with the state(6) to be set 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with 
error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 [2024-11-20 08:28:56.773508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1a8000d020 is same with the state(6) to be set 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error 
(sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Read completed with error (sct=0, sc=8) 00:31:43.161 Write completed with error (sct=0, sc=8) 00:31:43.161 [2024-11-20 08:28:56.774090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1a8000d7e0 is same with the state(6) to be set 00:31:43.161 Initializing NVMe Controllers 00:31:43.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:43.161 Controller IO queue size 128, less than required. 00:31:43.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:43.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:31:43.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:31:43.161 Initialization complete. Launching workers. 
00:31:43.161 ========================================================
00:31:43.161 Latency(us)
00:31:43.161 Device Information : IOPS MiB/s Average min max
00:31:43.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 161.25 0.08 913833.94 249.29 1006161.66
00:31:43.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 173.19 0.08 949671.15 285.44 1009839.86
00:31:43.161 ========================================================
00:31:43.161 Total : 334.44 0.16 932392.49 249.29 1009839.86
00:31:43.161
00:31:43.161 [2024-11-20 08:28:56.774691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1af99a0 (9): Bad file descriptor
00:31:43.161 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:31:43.161 08:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:43.161 08:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:31:43.161 08:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1880306
00:31:43.161 08:28:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 1880306
00:31:43.421 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (1880306) - No such process
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 1880306
00:31:43.421 08:28:57
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 1880306 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 1880306 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:43.421 [2024-11-20 08:28:57.309789] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=1880788
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1880788
00:31:43.421 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:43.421 [2024-11-20 08:28:57.396739] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:31:43.988 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:43.988 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1880788
00:31:43.988 08:28:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:44.555 08:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:44.555 08:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1880788
00:31:44.555 08:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:44.814 08:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:45.072 08:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1880788
00:31:45.072 08:28:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:45.339 08:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:45.339 08:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1880788
00:31:45.339 08:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:45.911 08:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:45.911 08:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1880788
00:31:45.911 08:28:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:46.478 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:46.478 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1880788
00:31:46.478 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:31:46.737 Initializing NVMe Controllers
00:31:46.737 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:46.737 Controller IO queue size 128, less than required.
00:31:46.737 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:31:46.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:31:46.737 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:31:46.737 Initialization complete. Launching workers.
00:31:46.737 ========================================================
00:31:46.737 Latency(us)
00:31:46.737 Device Information : IOPS MiB/s Average min max
00:31:46.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003242.99 1000144.93 1042335.68
00:31:46.737 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004093.64 1000305.69 1010976.73
00:31:46.737 ========================================================
00:31:46.737 Total : 256.00 0.12 1003668.32 1000144.93 1042335.68
00:31:46.737
00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 1880788
00:31:46.996 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (1880788) - No such process
00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 1880788
00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup
00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync
00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e
00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem --
nvmf/common.sh@103 -- # for i in {1..20} 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:46.996 rmmod nvme_tcp 00:31:46.996 rmmod nvme_fabrics 00:31:46.996 rmmod nvme_keyring 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 1880098 ']' 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 1880098 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 1880098 ']' 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 1880098 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1880098 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:46.996 08:29:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1880098' 00:31:46.996 killing process with pid 1880098 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 1880098 00:31:46.996 08:29:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 1880098 00:31:47.255 08:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:47.255 08:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:31:47.255 08:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@254 -- # local dev 00:31:47.255 08:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # remove_target_ns 00:31:47.255 08:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:47.255 08:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:47.255 08:29:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # delete_main_bridge 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@121 -- # return 0 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:49.796 08:29:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # ip addr flush dev 
cvl_0_1 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@274 -- # iptr 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-save 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@548 -- # iptables-restore 00:31:49.796 00:31:49.796 real 0m16.342s 00:31:49.796 user 0m26.496s 00:31:49.796 sys 0m6.183s 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:31:49.796 ************************************ 00:31:49.796 END TEST nvmf_delete_subsystem 00:31:49.796 ************************************ 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:49.796 
************************************ 00:31:49.796 START TEST nvmf_host_management 00:31:49.796 ************************************ 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:31:49.796 * Looking for test storage... 00:31:49.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:31:49.796 08:29:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:49.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.796 --rc genhtml_branch_coverage=1 00:31:49.796 --rc genhtml_function_coverage=1 00:31:49.796 --rc genhtml_legend=1 00:31:49.796 --rc geninfo_all_blocks=1 00:31:49.796 --rc geninfo_unexecuted_blocks=1 00:31:49.796 00:31:49.796 ' 00:31:49.796 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:49.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.797 --rc genhtml_branch_coverage=1 00:31:49.797 --rc genhtml_function_coverage=1 00:31:49.797 --rc genhtml_legend=1 00:31:49.797 --rc geninfo_all_blocks=1 00:31:49.797 --rc geninfo_unexecuted_blocks=1 00:31:49.797 00:31:49.797 ' 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:49.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:31:49.797 --rc genhtml_branch_coverage=1 00:31:49.797 --rc genhtml_function_coverage=1 00:31:49.797 --rc genhtml_legend=1 00:31:49.797 --rc geninfo_all_blocks=1 00:31:49.797 --rc geninfo_unexecuted_blocks=1 00:31:49.797 00:31:49.797 ' 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:49.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:49.797 --rc genhtml_branch_coverage=1 00:31:49.797 --rc genhtml_function_coverage=1 00:31:49.797 --rc genhtml_legend=1 00:31:49.797 --rc geninfo_all_blocks=1 00:31:49.797 --rc geninfo_unexecuted_blocks=1 00:31:49.797 00:31:49.797 ' 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:49.797 08:29:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.797 
08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:31:49.797 08:29:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:31:49.797 08:29:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.120 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:55.409 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:31:55.409 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:55.409 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:55.409 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:55.409 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:55.409 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:55.409 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # 
net_devs=() 00:31:55.409 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:55.409 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:31:55.409 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:31:55.409 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:55.410 08:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:55.410 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice 
== unknown ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:55.410 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:55.410 08:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:55.410 Found net devices under 0000:86:00.0: cvl_0_0 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:55.410 Found net devices under 0000:86:00.1: cvl_0_1 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@247 -- # create_target_ns 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:55.410 
08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:55.410 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( 
_dev = _dev, max = _dev )) 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # 
add_to_ns cvl_0_1 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:55.411 10.0.0.1 
00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:55.411 10.0.0.2 00:31:55.411 
08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:31:55.411 08:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:55.411 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:55.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:55.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.427 ms 00:31:55.412 00:31:55.412 --- 10.0.0.1 ping statistics --- 00:31:55.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.412 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:55.412 08:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:55.412 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:31:55.684 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:55.684 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:31:55.684 00:31:55.684 --- 10.0.0.2 ping statistics --- 00:31:55.684 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:55.684 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair++ )) 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator0 
00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator0 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=initiator1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target0 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target0 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:31:55.684 08:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:55.684 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # get_net_dev target1 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # local dev=target1 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # return 1 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@159 -- # dev= 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@160 -- # return 0 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:31:55.685 ' 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:55.685 08:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=1885022 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 1885022 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:31:55.685 08:29:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1885022 ']' 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:55.685 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:55.685 08:29:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:55.685 [2024-11-20 08:29:09.587764] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:55.685 [2024-11-20 08:29:09.588690] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:31:55.685 [2024-11-20 08:29:09.588723] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:55.685 [2024-11-20 08:29:09.668530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:55.944 [2024-11-20 08:29:09.712609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:55.944 [2024-11-20 08:29:09.712646] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:55.944 [2024-11-20 08:29:09.712654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:55.944 [2024-11-20 08:29:09.712660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:55.944 [2024-11-20 08:29:09.712665] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:55.944 [2024-11-20 08:29:09.714272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:55.944 [2024-11-20 08:29:09.714383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:55.944 [2024-11-20 08:29:09.714492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:55.944 [2024-11-20 08:29:09.714492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:55.944 [2024-11-20 08:29:09.782021] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:55.944 [2024-11-20 08:29:09.782755] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:55.944 [2024-11-20 08:29:09.782959] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:55.944 [2024-11-20 08:29:09.783383] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:55.944 [2024-11-20 08:29:09.783434] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.514 [2024-11-20 08:29:10.475392] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.514 08:29:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:56.514 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.773 Malloc0 00:31:56.773 [2024-11-20 08:29:10.567453] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=1885109 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 1885109 /var/tmp/bdevperf.sock 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 1885109 ']' 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:56.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:56.773 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:31:56.774 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.774 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:31:56.774 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:56.774 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:31:56.774 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:56.774 { 00:31:56.774 "params": { 00:31:56.774 "name": "Nvme$subsystem", 00:31:56.774 "trtype": "$TEST_TRANSPORT", 00:31:56.774 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:56.774 "adrfam": "ipv4", 00:31:56.774 "trsvcid": "$NVMF_PORT", 00:31:56.774 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:31:56.774 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:56.774 "hdgst": ${hdgst:-false}, 00:31:56.774 "ddgst": ${ddgst:-false} 00:31:56.774 }, 00:31:56.774 "method": "bdev_nvme_attach_controller" 00:31:56.774 } 00:31:56.774 EOF 00:31:56.774 )") 00:31:56.774 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:31:56.774 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:31:56.774 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:31:56.774 08:29:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:31:56.774 "params": { 00:31:56.774 "name": "Nvme0", 00:31:56.774 "trtype": "tcp", 00:31:56.774 "traddr": "10.0.0.2", 00:31:56.774 "adrfam": "ipv4", 00:31:56.774 "trsvcid": "4420", 00:31:56.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:56.774 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:56.774 "hdgst": false, 00:31:56.774 "ddgst": false 00:31:56.774 }, 00:31:56.774 "method": "bdev_nvme_attach_controller" 00:31:56.774 }' 00:31:56.774 [2024-11-20 08:29:10.668144] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:31:56.774 [2024-11-20 08:29:10.668196] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1885109 ] 00:31:56.774 [2024-11-20 08:29:10.733897] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:56.774 [2024-11-20 08:29:10.775814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:57.341 Running I/O for 10 seconds... 
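The trace above shows gen_nvmf_target_json building a bdev_nvme_attach_controller JSON fragment from a heredoc template, expanding it for subsystem 0, and piping the result to bdevperf via /dev/fd/63. A minimal standalone sketch of that config generation, using the values visible in the printed output (the gen_config helper name is illustrative, not the SPDK function):

```shell
#!/bin/sh
# Illustrative re-creation of the config fragment the trace shows being
# substituted: subsystem id 0, TCP transport, target 10.0.0.2:4420.
gen_config() {
    subsystem=$1
    cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
    "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
}

gen_config 0
```

In the real script the fragment is joined into a config array and fed to bdevperf's --json option through process substitution, so no temporary file is written.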
00:31:57.341 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:57.341 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:31:57.341 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:31:57.341 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.341 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:57.341 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:31:57.342 08:29:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=95 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 95 -ge 100 ']' 00:31:57.342 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:31:57.603 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:31:57.603 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:31:57.603 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:31:57.603 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=707 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 707 -ge 100 ']' 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:57.604 [2024-11-20 08:29:11.491146] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491194] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491206] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491212] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491219] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491225] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491231] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491237] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491243] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491255] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491261] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491268] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491274] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the 
state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491299] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491305] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491311] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491322] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491328] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491333] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491339] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491345] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 
08:29:11.491366] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491372] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.491403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xeeeec0 is same with the state(6) to be set 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:57.604 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:31:57.604 [2024-11-20 08:29:11.498228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.604 [2024-11-20 08:29:11.498262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.604 [2024-11-20 08:29:11.498272] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.604 [2024-11-20 08:29:11.498280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.604 [2024-11-20 08:29:11.498288] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.604 [2024-11-20 08:29:11.498295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.604 [2024-11-20 08:29:11.498303] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:57.604 [2024-11-20 08:29:11.498310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.604 [2024-11-20 08:29:11.498316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c2500 is same with the state(6) to be set 00:31:57.604 [2024-11-20 08:29:11.498375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.604 [2024-11-20 08:29:11.498385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.604 [2024-11-20 08:29:11.498399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.604 [2024-11-20 08:29:11.498408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:31:57.604 [2024-11-20 08:29:11.498417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.604 [2024-11-20 08:29:11.498425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.604 [2024-11-20 08:29:11.498433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.604 [2024-11-20 08:29:11.498444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.604 [2024-11-20 08:29:11.498452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.604 [2024-11-20 08:29:11.498459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498503] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 
[2024-11-20 08:29:11.498765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498848] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:106496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:106624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:106752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:107008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.498988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.498997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:107136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.605 [2024-11-20 08:29:11.499003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.605 [2024-11-20 08:29:11.499015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:107264 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:107392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:107520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:107648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:107776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:107904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 
[2024-11-20 08:29:11.499104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:108032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:108160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:108288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:108416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:108544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:108672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499185] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:108800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:108928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:109056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:109184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:109312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499275] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:109440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:109568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:109696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:109824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:109952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.499348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:110080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:57.606 [2024-11-20 08:29:11.499355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:57.606 [2024-11-20 08:29:11.500323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:31:57.606 task offset: 102016 on job bdev=Nvme0n1 fails 00:31:57.606 00:31:57.606 Latency(us) 00:31:57.606 [2024-11-20T07:29:11.634Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:57.606 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:57.606 Job: Nvme0n1 ended in about 0.41 seconds with error 00:31:57.606 Verification LBA range: start 0x0 length 0x400 00:31:57.606 Nvme0n1 : 0.41 1954.89 122.18 156.98 0.00 29503.01 1474.56 26838.55 00:31:57.606 [2024-11-20T07:29:11.634Z] =================================================================================================================== 00:31:57.606 [2024-11-20T07:29:11.634Z] Total : 1954.89 122.18 156.98 0.00 29503.01 1474.56 26838.55 00:31:57.606 [2024-11-20 08:29:11.502672] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:31:57.606 [2024-11-20 08:29:11.502693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c2500 (9): Bad file descriptor 00:31:57.606 [2024-11-20 08:29:11.505523] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 
00:31:57.606 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:57.606 08:29:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 1885109 00:31:58.544 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (1885109) - No such process 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:58.544 { 00:31:58.544 "params": { 00:31:58.544 "name": "Nvme$subsystem", 00:31:58.544 "trtype": "$TEST_TRANSPORT", 00:31:58.544 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:31:58.544 "adrfam": "ipv4", 00:31:58.544 "trsvcid": "$NVMF_PORT", 00:31:58.544 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:58.544 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:58.544 "hdgst": ${hdgst:-false}, 00:31:58.544 "ddgst": ${ddgst:-false} 00:31:58.544 }, 00:31:58.544 "method": "bdev_nvme_attach_controller" 00:31:58.544 } 00:31:58.544 EOF 00:31:58.544 )") 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:31:58.544 08:29:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:31:58.544 "params": { 00:31:58.544 "name": "Nvme0", 00:31:58.544 "trtype": "tcp", 00:31:58.544 "traddr": "10.0.0.2", 00:31:58.544 "adrfam": "ipv4", 00:31:58.544 "trsvcid": "4420", 00:31:58.544 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:31:58.544 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:31:58.544 "hdgst": false, 00:31:58.544 "ddgst": false 00:31:58.544 }, 00:31:58.544 "method": "bdev_nvme_attach_controller" 00:31:58.544 }' 00:31:58.544 [2024-11-20 08:29:12.561585] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:31:58.544 [2024-11-20 08:29:12.561636] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1885542 ] 00:31:58.803 [2024-11-20 08:29:12.636439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:58.803 [2024-11-20 08:29:12.674975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:59.061 Running I/O for 1 seconds... 
00:31:59.998 2048.00 IOPS, 128.00 MiB/s 00:31:59.998 Latency(us) 00:31:59.998 [2024-11-20T07:29:14.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:59.998 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:31:59.998 Verification LBA range: start 0x0 length 0x400 00:31:59.998 Nvme0n1 : 1.02 2071.70 129.48 0.00 0.00 30364.71 6709.64 29709.65 00:31:59.998 [2024-11-20T07:29:14.026Z] =================================================================================================================== 00:31:59.998 [2024-11-20T07:29:14.026Z] Total : 2071.70 129.48 0.00 0.00 30364.71 6709.64 29709.65 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:32:00.257 08:29:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:00.257 rmmod nvme_tcp 00:32:00.257 rmmod nvme_fabrics 00:32:00.257 rmmod nvme_keyring 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 1885022 ']' 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 1885022 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 1885022 ']' 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 1885022 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:00.257 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1885022 00:32:00.516 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:00.516 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:00.516 08:29:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1885022' 00:32:00.516 killing process with pid 1885022 00:32:00.516 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 1885022 00:32:00.516 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 1885022 00:32:00.516 [2024-11-20 08:29:14.464147] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:32:00.516 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:00.516 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:32:00.516 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@254 -- # local dev 00:32:00.516 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@257 -- # remove_target_ns 00:32:00.516 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:00.516 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:00.516 08:29:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@258 -- # delete_main_bridge 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@121 -- # return 0 00:32:03.053 08:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 
00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@274 -- # iptr 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-save 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@548 -- # iptables-restore 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:32:03.053 00:32:03.053 real 0m13.284s 00:32:03.053 user 0m19.163s 00:32:03.053 sys 0m6.295s 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:32:03.053 ************************************ 00:32:03.053 END TEST nvmf_host_management 00:32:03.053 ************************************ 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:03.053 ************************************ 00:32:03.053 START TEST nvmf_lvol 00:32:03.053 ************************************ 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:32:03.053 * Looking for test storage... 00:32:03.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:32:03.053 08:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:03.053 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:03.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.054 --rc genhtml_branch_coverage=1 00:32:03.054 --rc 
genhtml_function_coverage=1 00:32:03.054 --rc genhtml_legend=1 00:32:03.054 --rc geninfo_all_blocks=1 00:32:03.054 --rc geninfo_unexecuted_blocks=1 00:32:03.054 00:32:03.054 ' 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:03.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.054 --rc genhtml_branch_coverage=1 00:32:03.054 --rc genhtml_function_coverage=1 00:32:03.054 --rc genhtml_legend=1 00:32:03.054 --rc geninfo_all_blocks=1 00:32:03.054 --rc geninfo_unexecuted_blocks=1 00:32:03.054 00:32:03.054 ' 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:03.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.054 --rc genhtml_branch_coverage=1 00:32:03.054 --rc genhtml_function_coverage=1 00:32:03.054 --rc genhtml_legend=1 00:32:03.054 --rc geninfo_all_blocks=1 00:32:03.054 --rc geninfo_unexecuted_blocks=1 00:32:03.054 00:32:03.054 ' 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:03.054 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.054 --rc genhtml_branch_coverage=1 00:32:03.054 --rc genhtml_function_coverage=1 00:32:03.054 --rc genhtml_legend=1 00:32:03.054 --rc geninfo_all_blocks=1 00:32:03.054 --rc geninfo_unexecuted_blocks=1 00:32:03.054 00:32:03.054 ' 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:03.054 08:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:03.054 08:29:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:32:03.054 08:29:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:09.625 08:29:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:09.625 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:09.625 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:09.625 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:09.626 Found net devices under 0000:86:00.0: cvl_0_0 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:09.626 08:29:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:09.626 Found net devices under 0000:86:00.1: cvl_0_1 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@247 -- # create_target_ns 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@137 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@44 -- # ips=() 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 
ip=167772161 in_ns= 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:09.626 10.0.0.1 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:32:09.626 
08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:09.626 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:09.627 10.0.0.2 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local 
dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:09.627 08:29:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:09.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:09.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:32:09.627 00:32:09.627 --- 10.0.0.1 ping statistics --- 00:32:09.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.627 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:09.627 08:29:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:32:09.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:09.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:32:09.627 00:32:09.627 --- 10.0.0.2 ping statistics --- 00:32:09.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:09.627 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair++ )) 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:09.627 08:29:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:09.627 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=initiator1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:09.628 
08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target0 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@333 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # get_net_dev target1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # local dev=target1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # return 1 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@159 -- # dev= 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@160 -- # return 0 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 
00:32:09.628 ' 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=1889324 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 1889324 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 1889324 ']' 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:09.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:09.628 08:29:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:09.628 [2024-11-20 08:29:22.961196] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:09.628 [2024-11-20 08:29:22.962085] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:32:09.628 [2024-11-20 08:29:22.962119] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:09.628 [2024-11-20 08:29:23.040869] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:09.628 [2024-11-20 08:29:23.083338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:09.628 [2024-11-20 08:29:23.083376] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:09.628 [2024-11-20 08:29:23.083383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:09.628 [2024-11-20 08:29:23.083390] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:09.628 [2024-11-20 08:29:23.083395] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:09.628 [2024-11-20 08:29:23.084678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:09.628 [2024-11-20 08:29:23.084782] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:09.629 [2024-11-20 08:29:23.084783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:09.629 [2024-11-20 08:29:23.152050] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:09.629 [2024-11-20 08:29:23.152852] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:09.629 [2024-11-20 08:29:23.153136] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:09.629 [2024-11-20 08:29:23.153266] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:09.888 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:09.888 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:32:09.888 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:09.888 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:09.888 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:09.888 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:09.888 08:29:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:10.147 
[2024-11-20 08:29:24.001647] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.147 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.407 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:32:10.407 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:32:10.666 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:32:10.666 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:32:10.666 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:32:10.925 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=cc2e5740-a29a-475b-97cc-4520d526a8fa 00:32:10.925 08:29:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u cc2e5740-a29a-475b-97cc-4520d526a8fa lvol 20 00:32:11.185 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=6358682d-f5e1-4611-a164-d71a00e6a5ef 00:32:11.185 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:11.444 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 6358682d-f5e1-4611-a164-d71a00e6a5ef 00:32:11.444 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:11.703 [2024-11-20 08:29:25.605452] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.703 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:11.963 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=1889816 00:32:11.963 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:32:11.963 08:29:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:32:12.901 08:29:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 6358682d-f5e1-4611-a164-d71a00e6a5ef MY_SNAPSHOT 00:32:13.160 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=2b039e78-0664-4009-98a8-3d9d1130a66e 00:32:13.160 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 6358682d-f5e1-4611-a164-d71a00e6a5ef 30 00:32:13.418 08:29:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 2b039e78-0664-4009-98a8-3d9d1130a66e MY_CLONE 00:32:13.677 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1406c187-e8e9-4c93-a74e-96a781e5c77e 00:32:13.677 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1406c187-e8e9-4c93-a74e-96a781e5c77e 00:32:14.245 08:29:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 1889816 00:32:22.365 Initializing NVMe Controllers 00:32:22.365 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:32:22.365 Controller IO queue size 128, less than required. 00:32:22.365 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:22.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:32:22.365 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:32:22.365 Initialization complete. Launching workers. 
00:32:22.365 ======================================================== 00:32:22.365 Latency(us) 00:32:22.365 Device Information : IOPS MiB/s Average min max 00:32:22.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12579.11 49.14 10181.68 1544.62 51477.39 00:32:22.365 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12691.20 49.57 10086.24 3254.28 97328.11 00:32:22.365 ======================================================== 00:32:22.365 Total : 25270.30 98.71 10133.75 1544.62 97328.11 00:32:22.365 00:32:22.365 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:22.624 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 6358682d-f5e1-4611-a164-d71a00e6a5ef 00:32:22.883 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u cc2e5740-a29a-475b-97cc-4520d526a8fa 00:32:22.883 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:32:22.883 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:32:22.883 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:32:22.883 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:22.883 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:32:22.883 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:22.883 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol 
-- nvmf/common.sh@102 -- # set +e 00:32:22.883 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:22.883 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:22.883 rmmod nvme_tcp 00:32:23.142 rmmod nvme_fabrics 00:32:23.142 rmmod nvme_keyring 00:32:23.142 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:23.142 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:32:23.142 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:32:23.142 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 1889324 ']' 00:32:23.142 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 1889324 00:32:23.142 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 1889324 ']' 00:32:23.142 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 1889324 00:32:23.142 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:32:23.142 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:23.142 08:29:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1889324 00:32:23.142 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:23.142 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:23.142 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing 
process with pid 1889324' 00:32:23.142 killing process with pid 1889324 00:32:23.142 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 1889324 00:32:23.142 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 1889324 00:32:23.401 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:23.401 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:32:23.401 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@254 -- # local dev 00:32:23.401 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@257 -- # remove_target_ns 00:32:23.401 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:23.401 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:23.401 08:29:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:25.318 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@258 -- # delete_main_bridge 00:32:25.318 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:25.318 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@121 -- # return 0 00:32:25.318 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:25.318 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:25.318 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:25.318 08:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:32:25.318 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:32:25.318 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:25.318 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@274 -- # iptr 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-save 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@548 -- # iptables-restore 00:32:25.319 00:32:25.319 real 0m22.636s 00:32:25.319 user 0m55.863s 00:32:25.319 sys 0m10.094s 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:32:25.319 ************************************ 00:32:25.319 END TEST nvmf_lvol 00:32:25.319 ************************************ 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:25.319 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:25.583 ************************************ 00:32:25.583 START TEST nvmf_lvs_grow 00:32:25.583 ************************************ 00:32:25.583 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:32:25.583 * Looking for test storage... 
00:32:25.583 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:25.583 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:25.583 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:32:25.583 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:25.583 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:25.583 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:25.583 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:25.584 08:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:25.584 08:29:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:25.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.584 --rc genhtml_branch_coverage=1 00:32:25.584 --rc genhtml_function_coverage=1 00:32:25.584 --rc genhtml_legend=1 00:32:25.584 --rc geninfo_all_blocks=1 00:32:25.584 --rc geninfo_unexecuted_blocks=1 00:32:25.584 00:32:25.584 ' 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:25.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.584 --rc genhtml_branch_coverage=1 00:32:25.584 --rc genhtml_function_coverage=1 00:32:25.584 --rc genhtml_legend=1 00:32:25.584 --rc geninfo_all_blocks=1 00:32:25.584 --rc geninfo_unexecuted_blocks=1 00:32:25.584 00:32:25.584 ' 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:25.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.584 --rc genhtml_branch_coverage=1 00:32:25.584 --rc genhtml_function_coverage=1 00:32:25.584 --rc genhtml_legend=1 00:32:25.584 --rc geninfo_all_blocks=1 00:32:25.584 --rc geninfo_unexecuted_blocks=1 00:32:25.584 00:32:25.584 ' 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:25.584 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:25.584 --rc genhtml_branch_coverage=1 00:32:25.584 --rc genhtml_function_coverage=1 00:32:25.584 --rc genhtml_legend=1 00:32:25.584 --rc geninfo_all_blocks=1 00:32:25.584 --rc 
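The `cmp_versions` calls traced above split each version string on `.`, `-`, and `:` and compare field by field. A minimal re-implementation of that less-than check, as a sketch (the function name `lt_version` and the missing-field-defaults-to-0 behavior are illustrative, not copied from `scripts/common.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the field-wise "version less-than" comparison traced above.
# Splits both versions on the same separators the script uses (IFS=.-:)
# and compares numeric fields left to right; absent fields count as 0.
lt_version() {
    local IFS=.-:
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        local a=${v1[i]:-0} b=${v2[i]:-0}
        (( a > b )) && return 1   # first version is greater
        (( a < b )) && return 0   # first version is lower
    done
    return 1                      # equal: not strictly less-than
}

# Mirrors the "lt 1.15 2" check made against the lcov version above.
lt_version 1.15 2 && echo "1.15 < 2"
```

This only handles purely numeric fields; suffixes like `-rc1` would need extra handling.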
geninfo_unexecuted_blocks=1 00:32:25.584 00:32:25.584 ' 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:25.584 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:32:25.585 08:29:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- 
# x722=() 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:32.157 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:32.158 
08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:32.158 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:32.158 08:29:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:32.158 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:32.158 08:29:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:32.158 Found net devices under 0000:86:00.0: cvl_0_0 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:32.158 Found net devices under 0000:86:00.1: cvl_0_1 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@247 -- # create_target_ns 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:32.158 
08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:32:32.158 
08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:32:32.158 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:32:32.159 08:29:45 
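The `val_to_ip` step above turns the pool value 167772161 into `10.0.0.1` with a single `printf '%u.%u.%u.%u\n'`. The unpacking it relies on can be sketched as follows (the shifting/masking shown here is an assumption about how the octets are derived; the trace only shows the final `printf`):

```shell
#!/usr/bin/env bash
# Sketch: unpack a 32-bit integer into dotted-quad notation, as the
# val_to_ip helper in nvmf/setup.sh does for the ip_pool values.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # pool base used above (0x0A000001)
val_to_ip 167772162   # next address in the pair
```

The pool base `0x0a000001` seen in `setup_interfaces` is exactly 167772161, which is why the initiator/target pair lands on 10.0.0.1 and 10.0.0.2.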
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:32.159 10.0.0.1 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:32.159 08:29:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:32.159 10.0.0.2 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:32.159 08:29:45 
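Condensing the `create_target_ns`/`setup_interface_pair` trace above into the underlying `ip` commands: one physical port stays in the root namespace as the initiator, its sibling moves into a fresh namespace as the target. This is a sketch of what the harness effectively runs (requires root and the `cvl_0_0`/`cvl_0_1` interfaces present on this host):

```shell
# Target namespace and loopback (create_target_ns)
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up

# Move the target port into the namespace (add_to_ns)
ip link set cvl_0_1 netns nvmf_ns_spdk

# Address the pair from the ip_pool (set_ip): initiator 10.0.0.1, target 10.0.0.2
ip addr add 10.0.0.1/24 dev cvl_0_0
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1

# Bring both sides up (set_up)
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
```

Keeping the target behind a namespace lets the TCP traffic between initiator and target traverse the real NICs even though both ends live on one machine.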
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # 
get_initiator_ip_address initiator0 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator0 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:32.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:32.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.241 ms 00:32:32.159 00:32:32.159 --- 10.0.0.1 ping statistics --- 00:32:32.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.159 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target0 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:32:32.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:32.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.118 ms 00:32:32.159 00:32:32.159 --- 10.0.0.2 ping statistics --- 00:32:32.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:32.159 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair++ )) 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:32.159 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@98 -- # local dev=initiator0 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:32:32.160 08:29:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=initiator1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target0 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local 
dev=target0 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # get_net_dev target1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # local dev=target1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # return 1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@159 -- # dev= 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@160 -- # return 0 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:32:32.160 ' 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 
00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=1895142 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 1895142 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 1895142 ']' 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:32.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:32.160 [2024-11-20 08:29:45.668757] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:32.160 [2024-11-20 08:29:45.669728] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:32:32.160 [2024-11-20 08:29:45.669763] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:32.160 [2024-11-20 08:29:45.749227] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:32.160 [2024-11-20 08:29:45.789897] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:32.160 [2024-11-20 08:29:45.789932] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:32.160 [2024-11-20 08:29:45.789939] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:32.160 [2024-11-20 08:29:45.789945] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:32.160 [2024-11-20 08:29:45.789950] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:32.160 [2024-11-20 08:29:45.790487] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:32.160 [2024-11-20 08:29:45.855866] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:32.160 [2024-11-20 08:29:45.856078] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:32.160 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:32.161 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:32.161 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:32.161 08:29:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:32.161 [2024-11-20 08:29:46.083167] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:32.161 ************************************ 00:32:32.161 START TEST lvs_grow_clean 00:32:32.161 ************************************ 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:32:32.161 08:29:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:32.161 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:32.420 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:32.420 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:32.680 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=b8c6c2e9-8fe7-401f-a664-3eeef89dee96 00:32:32.680 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8c6c2e9-8fe7-401f-a664-3eeef89dee96 00:32:32.680 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:32.939 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:32.939 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:32.939 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b8c6c2e9-8fe7-401f-a664-3eeef89dee96 lvol 150 00:32:32.939 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=10c642fe-7fcb-49ac-805b-672c31540065 00:32:32.939 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:32.939 08:29:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:33.199 [2024-11-20 08:29:47.126887] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:33.199 [2024-11-20 08:29:47.127019] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:33.199 true 00:32:33.199 08:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8c6c2e9-8fe7-401f-a664-3eeef89dee96 00:32:33.199 08:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:33.458 08:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:33.458 08:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:33.716 08:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 10c642fe-7fcb-49ac-805b-672c31540065 00:32:33.716 08:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:33.976 [2024-11-20 08:29:47.899399] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:33.976 08:29:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:34.235 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1895479 00:32:34.235 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:34.235 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:34.235 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1895479 /var/tmp/bdevperf.sock 00:32:34.235 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 1895479 ']' 00:32:34.235 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:34.235 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:34.235 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:34.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:34.235 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:34.235 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:34.235 [2024-11-20 08:29:48.157266] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:32:34.235 [2024-11-20 08:29:48.157312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1895479 ] 00:32:34.235 [2024-11-20 08:29:48.231350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:34.494 [2024-11-20 08:29:48.273216] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:34.494 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:34.494 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:32:34.494 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:34.753 Nvme0n1 00:32:34.753 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:35.012 [ 00:32:35.012 { 00:32:35.012 "name": "Nvme0n1", 00:32:35.012 "aliases": [ 00:32:35.012 "10c642fe-7fcb-49ac-805b-672c31540065" 00:32:35.012 ], 00:32:35.012 "product_name": "NVMe disk", 00:32:35.012 
"block_size": 4096, 00:32:35.012 "num_blocks": 38912, 00:32:35.012 "uuid": "10c642fe-7fcb-49ac-805b-672c31540065", 00:32:35.012 "numa_id": 1, 00:32:35.012 "assigned_rate_limits": { 00:32:35.012 "rw_ios_per_sec": 0, 00:32:35.012 "rw_mbytes_per_sec": 0, 00:32:35.012 "r_mbytes_per_sec": 0, 00:32:35.012 "w_mbytes_per_sec": 0 00:32:35.012 }, 00:32:35.012 "claimed": false, 00:32:35.012 "zoned": false, 00:32:35.012 "supported_io_types": { 00:32:35.012 "read": true, 00:32:35.012 "write": true, 00:32:35.012 "unmap": true, 00:32:35.012 "flush": true, 00:32:35.012 "reset": true, 00:32:35.012 "nvme_admin": true, 00:32:35.012 "nvme_io": true, 00:32:35.012 "nvme_io_md": false, 00:32:35.012 "write_zeroes": true, 00:32:35.012 "zcopy": false, 00:32:35.012 "get_zone_info": false, 00:32:35.012 "zone_management": false, 00:32:35.012 "zone_append": false, 00:32:35.012 "compare": true, 00:32:35.012 "compare_and_write": true, 00:32:35.012 "abort": true, 00:32:35.012 "seek_hole": false, 00:32:35.012 "seek_data": false, 00:32:35.012 "copy": true, 00:32:35.012 "nvme_iov_md": false 00:32:35.012 }, 00:32:35.012 "memory_domains": [ 00:32:35.012 { 00:32:35.012 "dma_device_id": "system", 00:32:35.012 "dma_device_type": 1 00:32:35.012 } 00:32:35.012 ], 00:32:35.012 "driver_specific": { 00:32:35.012 "nvme": [ 00:32:35.012 { 00:32:35.012 "trid": { 00:32:35.012 "trtype": "TCP", 00:32:35.012 "adrfam": "IPv4", 00:32:35.012 "traddr": "10.0.0.2", 00:32:35.012 "trsvcid": "4420", 00:32:35.012 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:35.012 }, 00:32:35.012 "ctrlr_data": { 00:32:35.012 "cntlid": 1, 00:32:35.012 "vendor_id": "0x8086", 00:32:35.012 "model_number": "SPDK bdev Controller", 00:32:35.012 "serial_number": "SPDK0", 00:32:35.013 "firmware_revision": "25.01", 00:32:35.013 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:35.013 "oacs": { 00:32:35.013 "security": 0, 00:32:35.013 "format": 0, 00:32:35.013 "firmware": 0, 00:32:35.013 "ns_manage": 0 00:32:35.013 }, 00:32:35.013 "multi_ctrlr": true, 
00:32:35.013 "ana_reporting": false 00:32:35.013 }, 00:32:35.013 "vs": { 00:32:35.013 "nvme_version": "1.3" 00:32:35.013 }, 00:32:35.013 "ns_data": { 00:32:35.013 "id": 1, 00:32:35.013 "can_share": true 00:32:35.013 } 00:32:35.013 } 00:32:35.013 ], 00:32:35.013 "mp_policy": "active_passive" 00:32:35.013 } 00:32:35.013 } 00:32:35.013 ] 00:32:35.013 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1895699 00:32:35.013 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:35.013 08:29:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:35.013 Running I/O for 10 seconds... 00:32:35.950 Latency(us) 00:32:35.950 [2024-11-20T07:29:49.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:35.950 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:35.950 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:32:35.950 [2024-11-20T07:29:49.978Z] =================================================================================================================== 00:32:35.950 [2024-11-20T07:29:49.978Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:32:35.950 00:32:36.887 08:29:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b8c6c2e9-8fe7-401f-a664-3eeef89dee96 00:32:36.887 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:36.887 Nvme0n1 : 2.00 22828.50 89.17 0.00 0.00 0.00 0.00 0.00 00:32:36.887 [2024-11-20T07:29:50.915Z] 
=================================================================================================================== 00:32:36.887 [2024-11-20T07:29:50.915Z] Total : 22828.50 89.17 0.00 0.00 0.00 0.00 0.00 00:32:36.887 00:32:37.145 true 00:32:37.145 08:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8c6c2e9-8fe7-401f-a664-3eeef89dee96 00:32:37.145 08:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:37.404 08:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:37.404 08:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:37.404 08:29:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 1895699 00:32:37.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:37.972 Nvme0n1 : 3.00 23008.33 89.88 0.00 0.00 0.00 0.00 0.00 00:32:37.972 [2024-11-20T07:29:52.000Z] =================================================================================================================== 00:32:37.972 [2024-11-20T07:29:52.000Z] Total : 23008.33 89.88 0.00 0.00 0.00 0.00 0.00 00:32:37.972 00:32:38.909 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:38.909 Nvme0n1 : 4.00 23130.00 90.35 0.00 0.00 0.00 0.00 0.00 00:32:38.909 [2024-11-20T07:29:52.937Z] =================================================================================================================== 00:32:38.909 [2024-11-20T07:29:52.937Z] Total : 23130.00 90.35 0.00 0.00 0.00 0.00 0.00 00:32:38.909 00:32:40.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:32:40.290 Nvme0n1 : 5.00 23228.40 90.74 0.00 0.00 0.00 0.00 0.00 00:32:40.290 [2024-11-20T07:29:54.318Z] =================================================================================================================== 00:32:40.290 [2024-11-20T07:29:54.318Z] Total : 23228.40 90.74 0.00 0.00 0.00 0.00 0.00 00:32:40.290 00:32:41.227 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:41.227 Nvme0n1 : 6.00 23294.00 90.99 0.00 0.00 0.00 0.00 0.00 00:32:41.227 [2024-11-20T07:29:55.255Z] =================================================================================================================== 00:32:41.227 [2024-11-20T07:29:55.255Z] Total : 23294.00 90.99 0.00 0.00 0.00 0.00 0.00 00:32:41.227 00:32:42.165 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:42.165 Nvme0n1 : 7.00 23350.00 91.21 0.00 0.00 0.00 0.00 0.00 00:32:42.165 [2024-11-20T07:29:56.193Z] =================================================================================================================== 00:32:42.165 [2024-11-20T07:29:56.193Z] Total : 23350.00 91.21 0.00 0.00 0.00 0.00 0.00 00:32:42.165 00:32:43.102 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:43.102 Nvme0n1 : 8.00 23390.12 91.37 0.00 0.00 0.00 0.00 0.00 00:32:43.102 [2024-11-20T07:29:57.130Z] =================================================================================================================== 00:32:43.102 [2024-11-20T07:29:57.130Z] Total : 23390.12 91.37 0.00 0.00 0.00 0.00 0.00 00:32:43.102 00:32:44.039 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.039 Nvme0n1 : 9.00 23430.00 91.52 0.00 0.00 0.00 0.00 0.00 00:32:44.039 [2024-11-20T07:29:58.067Z] =================================================================================================================== 00:32:44.039 [2024-11-20T07:29:58.067Z] Total : 23430.00 91.52 0.00 0.00 0.00 0.00 0.00 00:32:44.039 
00:32:44.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.977 Nvme0n1 : 10.00 23449.20 91.60 0.00 0.00 0.00 0.00 0.00 00:32:44.977 [2024-11-20T07:29:59.005Z] =================================================================================================================== 00:32:44.977 [2024-11-20T07:29:59.005Z] Total : 23449.20 91.60 0.00 0.00 0.00 0.00 0.00 00:32:44.977 00:32:44.977 00:32:44.977 Latency(us) 00:32:44.977 [2024-11-20T07:29:59.005Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:44.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:44.977 Nvme0n1 : 10.01 23448.95 91.60 0.00 0.00 5455.81 3120.76 25215.76 00:32:44.977 [2024-11-20T07:29:59.005Z] =================================================================================================================== 00:32:44.977 [2024-11-20T07:29:59.005Z] Total : 23448.95 91.60 0.00 0.00 5455.81 3120.76 25215.76 00:32:44.977 { 00:32:44.977 "results": [ 00:32:44.977 { 00:32:44.977 "job": "Nvme0n1", 00:32:44.977 "core_mask": "0x2", 00:32:44.977 "workload": "randwrite", 00:32:44.977 "status": "finished", 00:32:44.977 "queue_depth": 128, 00:32:44.977 "io_size": 4096, 00:32:44.977 "runtime": 10.005566, 00:32:44.977 "iops": 23448.948315367667, 00:32:44.977 "mibps": 91.59745435690495, 00:32:44.977 "io_failed": 0, 00:32:44.977 "io_timeout": 0, 00:32:44.977 "avg_latency_us": 5455.805759294664, 00:32:44.978 "min_latency_us": 3120.7619047619046, 00:32:44.978 "max_latency_us": 25215.75619047619 00:32:44.978 } 00:32:44.978 ], 00:32:44.978 "core_count": 1 00:32:44.978 } 00:32:44.978 08:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1895479 00:32:44.978 08:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 1895479 ']' 00:32:44.978 08:29:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 1895479 00:32:44.978 08:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:32:44.978 08:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:44.978 08:29:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1895479 00:32:45.237 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:45.237 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:45.237 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1895479' 00:32:45.237 killing process with pid 1895479 00:32:45.237 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 1895479 00:32:45.237 Received shutdown signal, test time was about 10.000000 seconds 00:32:45.237 00:32:45.237 Latency(us) 00:32:45.237 [2024-11-20T07:29:59.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:45.237 [2024-11-20T07:29:59.265Z] =================================================================================================================== 00:32:45.237 [2024-11-20T07:29:59.265Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:45.237 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 1895479 00:32:45.237 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:45.497 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:32:45.755 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8c6c2e9-8fe7-401f-a664-3eeef89dee96 00:32:45.755 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:32:45.755 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:32:45.755 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:32:45.755 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:46.015 [2024-11-20 08:29:59.910980] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:32:46.015 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8c6c2e9-8fe7-401f-a664-3eeef89dee96 00:32:46.015 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:32:46.015 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8c6c2e9-8fe7-401f-a664-3eeef89dee96 00:32:46.015 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.015 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.015 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.015 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.015 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.015 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:32:46.015 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:46.015 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:32:46.015 08:29:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8c6c2e9-8fe7-401f-a664-3eeef89dee96 00:32:46.275 request: 00:32:46.275 { 00:32:46.275 "uuid": "b8c6c2e9-8fe7-401f-a664-3eeef89dee96", 00:32:46.275 "method": 
"bdev_lvol_get_lvstores", 00:32:46.275 "req_id": 1 00:32:46.275 } 00:32:46.275 Got JSON-RPC error response 00:32:46.275 response: 00:32:46.275 { 00:32:46.275 "code": -19, 00:32:46.275 "message": "No such device" 00:32:46.275 } 00:32:46.275 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:32:46.275 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:32:46.275 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:32:46.275 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:32:46.275 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:46.534 aio_bdev 00:32:46.534 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 10c642fe-7fcb-49ac-805b-672c31540065 00:32:46.534 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=10c642fe-7fcb-49ac-805b-672c31540065 00:32:46.534 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:32:46.534 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:32:46.534 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:32:46.534 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:32:46.534 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:32:46.858 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 10c642fe-7fcb-49ac-805b-672c31540065 -t 2000 00:32:46.858 [ 00:32:46.858 { 00:32:46.858 "name": "10c642fe-7fcb-49ac-805b-672c31540065", 00:32:46.858 "aliases": [ 00:32:46.858 "lvs/lvol" 00:32:46.858 ], 00:32:46.858 "product_name": "Logical Volume", 00:32:46.858 "block_size": 4096, 00:32:46.858 "num_blocks": 38912, 00:32:46.858 "uuid": "10c642fe-7fcb-49ac-805b-672c31540065", 00:32:46.858 "assigned_rate_limits": { 00:32:46.858 "rw_ios_per_sec": 0, 00:32:46.858 "rw_mbytes_per_sec": 0, 00:32:46.858 "r_mbytes_per_sec": 0, 00:32:46.858 "w_mbytes_per_sec": 0 00:32:46.858 }, 00:32:46.858 "claimed": false, 00:32:46.858 "zoned": false, 00:32:46.858 "supported_io_types": { 00:32:46.858 "read": true, 00:32:46.858 "write": true, 00:32:46.858 "unmap": true, 00:32:46.858 "flush": false, 00:32:46.858 "reset": true, 00:32:46.858 "nvme_admin": false, 00:32:46.858 "nvme_io": false, 00:32:46.858 "nvme_io_md": false, 00:32:46.858 "write_zeroes": true, 00:32:46.858 "zcopy": false, 00:32:46.858 "get_zone_info": false, 00:32:46.858 "zone_management": false, 00:32:46.858 "zone_append": false, 00:32:46.858 "compare": false, 00:32:46.858 "compare_and_write": false, 00:32:46.858 "abort": false, 00:32:46.858 "seek_hole": true, 00:32:46.858 "seek_data": true, 00:32:46.858 "copy": false, 00:32:46.858 "nvme_iov_md": false 00:32:46.858 }, 00:32:46.858 "driver_specific": { 00:32:46.858 "lvol": { 00:32:46.858 "lvol_store_uuid": "b8c6c2e9-8fe7-401f-a664-3eeef89dee96", 00:32:46.858 "base_bdev": "aio_bdev", 00:32:46.858 
"thin_provision": false, 00:32:46.858 "num_allocated_clusters": 38, 00:32:46.858 "snapshot": false, 00:32:46.858 "clone": false, 00:32:46.858 "esnap_clone": false 00:32:46.858 } 00:32:46.858 } 00:32:46.858 } 00:32:46.858 ] 00:32:46.858 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:32:46.858 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8c6c2e9-8fe7-401f-a664-3eeef89dee96 00:32:46.858 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:32:47.167 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:32:47.167 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b8c6c2e9-8fe7-401f-a664-3eeef89dee96 00:32:47.167 08:30:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:32:47.167 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:32:47.167 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 10c642fe-7fcb-49ac-805b-672c31540065 00:32:47.434 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b8c6c2e9-8fe7-401f-a664-3eeef89dee96 
00:32:47.693 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:32:47.952 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:47.952 00:32:47.952 real 0m15.660s 00:32:47.952 user 0m15.198s 00:32:47.952 sys 0m1.462s 00:32:47.952 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:47.952 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:32:47.952 ************************************ 00:32:47.953 END TEST lvs_grow_clean 00:32:47.953 ************************************ 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:32:47.953 ************************************ 00:32:47.953 START TEST lvs_grow_dirty 00:32:47.953 ************************************ 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:32:47.953 08:30:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:47.953 08:30:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:32:48.212 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:32:48.212 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:32:48.471 08:30:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=064f26cd-cece-4ef1-b2da-de55dbab5f73 00:32:48.471 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:32:48.471 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:32:48.471 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:32:48.471 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:32:48.471 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 lvol 150 00:32:48.729 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3cc1371c-2b0e-4d05-8df1-3169b451ec48 00:32:48.729 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:32:48.729 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:32:48.988 [2024-11-20 08:30:02.842884] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:32:48.988 [2024-11-20 
08:30:02.843011] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:32:48.988 true 00:32:48.988 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:32:48.988 08:30:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:32:49.247 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:32:49.247 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:32:49.247 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3cc1371c-2b0e-4d05-8df1-3169b451ec48 00:32:49.506 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:32:49.766 [2024-11-20 08:30:03.567350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:49.766 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:49.766 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=1898189 00:32:49.766 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:49.766 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 1898189 /var/tmp/bdevperf.sock 00:32:49.766 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1898189 ']' 00:32:49.766 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:49.766 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:49.766 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:49.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:49.766 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:49.766 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:32:49.766 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:32:50.026 [2024-11-20 08:30:03.806555] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:32:50.026 [2024-11-20 08:30:03.806603] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1898189 ] 00:32:50.026 [2024-11-20 08:30:03.866401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:50.026 [2024-11-20 08:30:03.906616] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:50.026 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:50.026 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:32:50.026 08:30:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:32:50.594 Nvme0n1 00:32:50.594 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:32:50.594 [ 00:32:50.594 { 00:32:50.594 "name": "Nvme0n1", 00:32:50.594 "aliases": [ 00:32:50.594 "3cc1371c-2b0e-4d05-8df1-3169b451ec48" 00:32:50.594 ], 00:32:50.594 "product_name": "NVMe disk", 00:32:50.594 "block_size": 4096, 00:32:50.594 "num_blocks": 38912, 00:32:50.594 "uuid": "3cc1371c-2b0e-4d05-8df1-3169b451ec48", 00:32:50.594 "numa_id": 1, 00:32:50.594 "assigned_rate_limits": { 00:32:50.594 "rw_ios_per_sec": 0, 00:32:50.594 "rw_mbytes_per_sec": 0, 00:32:50.594 "r_mbytes_per_sec": 0, 00:32:50.594 "w_mbytes_per_sec": 0 00:32:50.594 }, 00:32:50.594 "claimed": false, 00:32:50.594 "zoned": false, 
00:32:50.594 "supported_io_types": { 00:32:50.594 "read": true, 00:32:50.594 "write": true, 00:32:50.594 "unmap": true, 00:32:50.594 "flush": true, 00:32:50.594 "reset": true, 00:32:50.594 "nvme_admin": true, 00:32:50.594 "nvme_io": true, 00:32:50.594 "nvme_io_md": false, 00:32:50.594 "write_zeroes": true, 00:32:50.594 "zcopy": false, 00:32:50.594 "get_zone_info": false, 00:32:50.594 "zone_management": false, 00:32:50.594 "zone_append": false, 00:32:50.594 "compare": true, 00:32:50.594 "compare_and_write": true, 00:32:50.594 "abort": true, 00:32:50.594 "seek_hole": false, 00:32:50.594 "seek_data": false, 00:32:50.594 "copy": true, 00:32:50.594 "nvme_iov_md": false 00:32:50.594 }, 00:32:50.594 "memory_domains": [ 00:32:50.594 { 00:32:50.594 "dma_device_id": "system", 00:32:50.594 "dma_device_type": 1 00:32:50.594 } 00:32:50.594 ], 00:32:50.594 "driver_specific": { 00:32:50.594 "nvme": [ 00:32:50.594 { 00:32:50.594 "trid": { 00:32:50.594 "trtype": "TCP", 00:32:50.594 "adrfam": "IPv4", 00:32:50.594 "traddr": "10.0.0.2", 00:32:50.594 "trsvcid": "4420", 00:32:50.594 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:32:50.594 }, 00:32:50.594 "ctrlr_data": { 00:32:50.594 "cntlid": 1, 00:32:50.594 "vendor_id": "0x8086", 00:32:50.594 "model_number": "SPDK bdev Controller", 00:32:50.594 "serial_number": "SPDK0", 00:32:50.594 "firmware_revision": "25.01", 00:32:50.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:32:50.594 "oacs": { 00:32:50.595 "security": 0, 00:32:50.595 "format": 0, 00:32:50.595 "firmware": 0, 00:32:50.595 "ns_manage": 0 00:32:50.595 }, 00:32:50.595 "multi_ctrlr": true, 00:32:50.595 "ana_reporting": false 00:32:50.595 }, 00:32:50.595 "vs": { 00:32:50.595 "nvme_version": "1.3" 00:32:50.595 }, 00:32:50.595 "ns_data": { 00:32:50.595 "id": 1, 00:32:50.595 "can_share": true 00:32:50.595 } 00:32:50.595 } 00:32:50.595 ], 00:32:50.595 "mp_policy": "active_passive" 00:32:50.595 } 00:32:50.595 } 00:32:50.595 ] 00:32:50.595 08:30:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:50.595 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=1898413 00:32:50.595 08:30:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:32:50.595 Running I/O for 10 seconds... 00:32:51.971 Latency(us) 00:32:51.971 [2024-11-20T07:30:05.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:51.971 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:51.971 Nvme0n1 : 1.00 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:32:51.971 [2024-11-20T07:30:05.999Z] =================================================================================================================== 00:32:51.971 [2024-11-20T07:30:05.999Z] Total : 22733.00 88.80 0.00 0.00 0.00 0.00 0.00 00:32:51.971 00:32:52.540 08:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:32:52.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:52.798 Nvme0n1 : 2.00 22877.00 89.36 0.00 0.00 0.00 0.00 0.00 00:32:52.798 [2024-11-20T07:30:06.826Z] =================================================================================================================== 00:32:52.798 [2024-11-20T07:30:06.826Z] Total : 22877.00 89.36 0.00 0.00 0.00 0.00 0.00 00:32:52.798 00:32:52.798 true 00:32:52.798 08:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:32:52.798 08:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:32:53.057 08:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:32:53.057 08:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:32:53.057 08:30:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 1898413 00:32:53.622 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:53.622 Nvme0n1 : 3.00 22871.33 89.34 0.00 0.00 0.00 0.00 0.00 00:32:53.622 [2024-11-20T07:30:07.650Z] =================================================================================================================== 00:32:53.622 [2024-11-20T07:30:07.650Z] Total : 22871.33 89.34 0.00 0.00 0.00 0.00 0.00 00:32:53.622 00:32:55.001 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:55.001 Nvme0n1 : 4.00 23035.75 89.98 0.00 0.00 0.00 0.00 0.00 00:32:55.001 [2024-11-20T07:30:09.029Z] =================================================================================================================== 00:32:55.001 [2024-11-20T07:30:09.029Z] Total : 23035.75 89.98 0.00 0.00 0.00 0.00 0.00 00:32:55.001 00:32:55.938 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:55.938 Nvme0n1 : 5.00 23153.00 90.44 0.00 0.00 0.00 0.00 0.00 00:32:55.938 [2024-11-20T07:30:09.966Z] =================================================================================================================== 00:32:55.938 [2024-11-20T07:30:09.966Z] Total : 23153.00 90.44 0.00 0.00 0.00 0.00 0.00 00:32:55.938 00:32:56.874 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:32:56.874 Nvme0n1 : 6.00 23114.83 90.29 0.00 0.00 0.00 0.00 0.00 00:32:56.874 [2024-11-20T07:30:10.902Z] =================================================================================================================== 00:32:56.874 [2024-11-20T07:30:10.902Z] Total : 23114.83 90.29 0.00 0.00 0.00 0.00 0.00 00:32:56.874 00:32:57.811 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:57.811 Nvme0n1 : 7.00 23185.14 90.57 0.00 0.00 0.00 0.00 0.00 00:32:57.811 [2024-11-20T07:30:11.839Z] =================================================================================================================== 00:32:57.811 [2024-11-20T07:30:11.839Z] Total : 23185.14 90.57 0.00 0.00 0.00 0.00 0.00 00:32:57.811 00:32:58.746 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:58.746 Nvme0n1 : 8.00 23247.75 90.81 0.00 0.00 0.00 0.00 0.00 00:32:58.746 [2024-11-20T07:30:12.774Z] =================================================================================================================== 00:32:58.746 [2024-11-20T07:30:12.774Z] Total : 23247.75 90.81 0.00 0.00 0.00 0.00 0.00 00:32:58.746 00:32:59.683 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:32:59.683 Nvme0n1 : 9.00 23301.78 91.02 0.00 0.00 0.00 0.00 0.00 00:32:59.683 [2024-11-20T07:30:13.711Z] =================================================================================================================== 00:32:59.683 [2024-11-20T07:30:13.711Z] Total : 23301.78 91.02 0.00 0.00 0.00 0.00 0.00 00:32:59.683 00:33:00.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:00.621 Nvme0n1 : 10.00 23333.80 91.15 0.00 0.00 0.00 0.00 0.00 00:33:00.621 [2024-11-20T07:30:14.649Z] =================================================================================================================== 00:33:00.621 [2024-11-20T07:30:14.649Z] Total : 23333.80 91.15 0.00 0.00 0.00 0.00 0.00 00:33:00.621 00:33:00.621 
00:33:00.621 Latency(us) 00:33:00.621 [2024-11-20T07:30:14.649Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.621 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:33:00.621 Nvme0n1 : 10.00 23337.38 91.16 0.00 0.00 5481.96 3136.37 26339.23 00:33:00.621 [2024-11-20T07:30:14.649Z] =================================================================================================================== 00:33:00.621 [2024-11-20T07:30:14.649Z] Total : 23337.38 91.16 0.00 0.00 5481.96 3136.37 26339.23 00:33:00.621 { 00:33:00.621 "results": [ 00:33:00.621 { 00:33:00.621 "job": "Nvme0n1", 00:33:00.621 "core_mask": "0x2", 00:33:00.621 "workload": "randwrite", 00:33:00.621 "status": "finished", 00:33:00.621 "queue_depth": 128, 00:33:00.621 "io_size": 4096, 00:33:00.621 "runtime": 10.003952, 00:33:00.622 "iops": 23337.377068582497, 00:33:00.622 "mibps": 91.16162917415038, 00:33:00.622 "io_failed": 0, 00:33:00.622 "io_timeout": 0, 00:33:00.622 "avg_latency_us": 5481.963036428676, 00:33:00.622 "min_latency_us": 3136.365714285714, 00:33:00.622 "max_latency_us": 26339.230476190478 00:33:00.622 } 00:33:00.622 ], 00:33:00.622 "core_count": 1 00:33:00.622 } 00:33:00.881 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 1898189 00:33:00.881 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 1898189 ']' 00:33:00.881 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 1898189 00:33:00.881 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:33:00.881 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:00.881 08:30:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1898189 00:33:00.881 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:00.881 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:00.881 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1898189' 00:33:00.881 killing process with pid 1898189 00:33:00.881 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 1898189 00:33:00.881 Received shutdown signal, test time was about 10.000000 seconds 00:33:00.881 00:33:00.881 Latency(us) 00:33:00.881 [2024-11-20T07:30:14.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.881 [2024-11-20T07:30:14.909Z] =================================================================================================================== 00:33:00.881 [2024-11-20T07:30:14.909Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:00.881 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 1898189 00:33:00.881 08:30:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:01.140 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:01.399 08:30:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:33:01.399 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 1895142 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 1895142 00:33:01.659 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 1895142 Killed "${NVMF_APP[@]}" "$@" 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=1900417 00:33:01.659 08:30:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 1900417 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 1900417 ']' 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:01.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:01.659 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:01.659 [2024-11-20 08:30:15.547546] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:01.659 [2024-11-20 08:30:15.548468] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:33:01.659 [2024-11-20 08:30:15.548515] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:01.659 [2024-11-20 08:30:15.624882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:01.659 [2024-11-20 08:30:15.665253] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:01.659 [2024-11-20 08:30:15.665289] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:01.659 [2024-11-20 08:30:15.665297] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:01.659 [2024-11-20 08:30:15.665305] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:01.659 [2024-11-20 08:30:15.665311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:01.659 [2024-11-20 08:30:15.665860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:01.918 [2024-11-20 08:30:15.733410] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:01.918 [2024-11-20 08:30:15.733620] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:33:01.918 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:01.918 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:33:01.918 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:01.918 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:01.918 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:01.918 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:01.918 08:30:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:02.177 [2024-11-20 08:30:15.979361] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:33:02.177 [2024-11-20 08:30:15.979554] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:33:02.177 [2024-11-20 08:30:15.979636] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:33:02.177 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:33:02.177 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3cc1371c-2b0e-4d05-8df1-3169b451ec48 00:33:02.177 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=3cc1371c-2b0e-4d05-8df1-3169b451ec48 00:33:02.177 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:02.177 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:02.177 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:02.177 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:02.177 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:02.436 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3cc1371c-2b0e-4d05-8df1-3169b451ec48 -t 2000 00:33:02.436 [ 00:33:02.436 { 00:33:02.436 "name": "3cc1371c-2b0e-4d05-8df1-3169b451ec48", 00:33:02.436 "aliases": [ 00:33:02.436 "lvs/lvol" 00:33:02.436 ], 00:33:02.436 "product_name": "Logical Volume", 00:33:02.436 "block_size": 4096, 00:33:02.436 "num_blocks": 38912, 00:33:02.436 "uuid": "3cc1371c-2b0e-4d05-8df1-3169b451ec48", 00:33:02.436 "assigned_rate_limits": { 00:33:02.436 "rw_ios_per_sec": 0, 00:33:02.436 "rw_mbytes_per_sec": 0, 00:33:02.436 "r_mbytes_per_sec": 0, 00:33:02.436 "w_mbytes_per_sec": 0 00:33:02.436 }, 00:33:02.436 "claimed": false, 00:33:02.436 "zoned": false, 00:33:02.436 "supported_io_types": { 00:33:02.436 "read": true, 00:33:02.436 "write": true, 00:33:02.436 "unmap": true, 00:33:02.436 "flush": false, 00:33:02.436 "reset": true, 00:33:02.436 "nvme_admin": false, 00:33:02.436 "nvme_io": false, 00:33:02.436 "nvme_io_md": false, 00:33:02.436 "write_zeroes": true, 
00:33:02.436 "zcopy": false, 00:33:02.436 "get_zone_info": false, 00:33:02.436 "zone_management": false, 00:33:02.436 "zone_append": false, 00:33:02.436 "compare": false, 00:33:02.436 "compare_and_write": false, 00:33:02.436 "abort": false, 00:33:02.436 "seek_hole": true, 00:33:02.436 "seek_data": true, 00:33:02.436 "copy": false, 00:33:02.436 "nvme_iov_md": false 00:33:02.436 }, 00:33:02.436 "driver_specific": { 00:33:02.436 "lvol": { 00:33:02.436 "lvol_store_uuid": "064f26cd-cece-4ef1-b2da-de55dbab5f73", 00:33:02.436 "base_bdev": "aio_bdev", 00:33:02.436 "thin_provision": false, 00:33:02.436 "num_allocated_clusters": 38, 00:33:02.436 "snapshot": false, 00:33:02.436 "clone": false, 00:33:02.436 "esnap_clone": false 00:33:02.436 } 00:33:02.436 } 00:33:02.436 } 00:33:02.436 ] 00:33:02.436 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:02.437 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:33:02.437 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:33:02.695 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:33:02.695 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:33:02.695 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:33:02.954 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:33:02.954 08:30:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:02.954 [2024-11-20 08:30:16.970344] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:33:03.213 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:33:03.213 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:33:03.213 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:33:03.213 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:03.213 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.213 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:03.213 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.213 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:03.213 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:03.213 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:03.213 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:33:03.213 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:33:03.213 request: 00:33:03.213 { 00:33:03.213 "uuid": "064f26cd-cece-4ef1-b2da-de55dbab5f73", 00:33:03.213 "method": "bdev_lvol_get_lvstores", 00:33:03.213 "req_id": 1 00:33:03.213 } 00:33:03.213 Got JSON-RPC error response 00:33:03.213 response: 00:33:03.213 { 00:33:03.213 "code": -19, 00:33:03.213 "message": "No such device" 00:33:03.213 } 00:33:03.214 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:33:03.214 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:03.214 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:03.214 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:03.214 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:33:03.473 aio_bdev 00:33:03.473 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3cc1371c-2b0e-4d05-8df1-3169b451ec48 00:33:03.473 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=3cc1371c-2b0e-4d05-8df1-3169b451ec48 00:33:03.473 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:33:03.473 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:33:03.473 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:33:03.473 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:33:03.473 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:33:03.732 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3cc1371c-2b0e-4d05-8df1-3169b451ec48 -t 2000 00:33:03.991 [ 00:33:03.991 { 00:33:03.991 "name": "3cc1371c-2b0e-4d05-8df1-3169b451ec48", 00:33:03.991 "aliases": [ 00:33:03.991 "lvs/lvol" 00:33:03.991 ], 00:33:03.991 "product_name": "Logical Volume", 00:33:03.991 "block_size": 4096, 00:33:03.991 "num_blocks": 38912, 00:33:03.991 "uuid": "3cc1371c-2b0e-4d05-8df1-3169b451ec48", 00:33:03.991 "assigned_rate_limits": { 00:33:03.991 "rw_ios_per_sec": 0, 00:33:03.991 "rw_mbytes_per_sec": 0, 00:33:03.991 
"r_mbytes_per_sec": 0, 00:33:03.991 "w_mbytes_per_sec": 0 00:33:03.991 }, 00:33:03.991 "claimed": false, 00:33:03.991 "zoned": false, 00:33:03.991 "supported_io_types": { 00:33:03.991 "read": true, 00:33:03.991 "write": true, 00:33:03.991 "unmap": true, 00:33:03.991 "flush": false, 00:33:03.991 "reset": true, 00:33:03.991 "nvme_admin": false, 00:33:03.991 "nvme_io": false, 00:33:03.991 "nvme_io_md": false, 00:33:03.991 "write_zeroes": true, 00:33:03.991 "zcopy": false, 00:33:03.991 "get_zone_info": false, 00:33:03.991 "zone_management": false, 00:33:03.991 "zone_append": false, 00:33:03.991 "compare": false, 00:33:03.991 "compare_and_write": false, 00:33:03.991 "abort": false, 00:33:03.991 "seek_hole": true, 00:33:03.991 "seek_data": true, 00:33:03.991 "copy": false, 00:33:03.991 "nvme_iov_md": false 00:33:03.991 }, 00:33:03.991 "driver_specific": { 00:33:03.991 "lvol": { 00:33:03.991 "lvol_store_uuid": "064f26cd-cece-4ef1-b2da-de55dbab5f73", 00:33:03.991 "base_bdev": "aio_bdev", 00:33:03.991 "thin_provision": false, 00:33:03.991 "num_allocated_clusters": 38, 00:33:03.991 "snapshot": false, 00:33:03.991 "clone": false, 00:33:03.991 "esnap_clone": false 00:33:03.991 } 00:33:03.991 } 00:33:03.991 } 00:33:03.991 ] 00:33:03.991 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:33:03.991 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:33:03.991 08:30:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:33:04.250 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:33:04.250 08:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:33:04.250 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:33:04.250 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:33:04.251 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3cc1371c-2b0e-4d05-8df1-3169b451ec48 00:33:04.509 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 064f26cd-cece-4ef1-b2da-de55dbab5f73 00:33:04.768 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:33:05.027 00:33:05.027 real 0m16.950s 00:33:05.027 user 0m34.320s 00:33:05.027 sys 0m3.844s 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:33:05.027 ************************************ 00:33:05.027 END TEST lvs_grow_dirty 00:33:05.027 ************************************ 
00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:33:05.027 nvmf_trace.0 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:05.027 08:30:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:05.027 rmmod nvme_tcp 00:33:05.027 rmmod nvme_fabrics 00:33:05.027 rmmod nvme_keyring 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 1900417 ']' 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 1900417 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 1900417 ']' 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 1900417 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.027 08:30:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1900417 00:33:05.027 08:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:05.027 08:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:05.027 
08:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1900417' 00:33:05.027 killing process with pid 1900417 00:33:05.027 08:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 1900417 00:33:05.027 08:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 1900417 00:33:05.286 08:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:05.286 08:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:33:05.286 08:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@254 -- # local dev 00:33:05.286 08:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:05.286 08:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:05.286 08:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:05.286 08:30:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@121 -- # return 0 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 
00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 
00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:33:07.824 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@274 -- # iptr 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-save 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@548 -- # iptables-restore 00:33:07.825 00:33:07.825 real 0m41.936s 00:33:07.825 user 0m51.985s 00:33:07.825 sys 0m10.378s 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:33:07.825 ************************************ 00:33:07.825 END TEST nvmf_lvs_grow 00:33:07.825 ************************************ 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:07.825 ************************************ 00:33:07.825 START TEST nvmf_bdev_io_wait 00:33:07.825 ************************************ 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh 
--transport=tcp --interrupt-mode 00:33:07.825 * Looking for test storage... 00:33:07.825 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:33:07.825 08:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:07.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.825 --rc genhtml_branch_coverage=1 00:33:07.825 --rc genhtml_function_coverage=1 00:33:07.825 --rc genhtml_legend=1 00:33:07.825 --rc geninfo_all_blocks=1 00:33:07.825 --rc geninfo_unexecuted_blocks=1 00:33:07.825 00:33:07.825 ' 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:07.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.825 --rc genhtml_branch_coverage=1 00:33:07.825 --rc genhtml_function_coverage=1 00:33:07.825 --rc genhtml_legend=1 00:33:07.825 --rc geninfo_all_blocks=1 00:33:07.825 --rc geninfo_unexecuted_blocks=1 00:33:07.825 00:33:07.825 ' 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:07.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.825 --rc genhtml_branch_coverage=1 00:33:07.825 --rc genhtml_function_coverage=1 00:33:07.825 --rc genhtml_legend=1 00:33:07.825 --rc geninfo_all_blocks=1 00:33:07.825 --rc geninfo_unexecuted_blocks=1 00:33:07.825 00:33:07.825 ' 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:07.825 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.825 
--rc genhtml_branch_coverage=1 00:33:07.825 --rc genhtml_function_coverage=1 00:33:07.825 --rc genhtml_legend=1 00:33:07.825 --rc geninfo_all_blocks=1 00:33:07.825 --rc geninfo_unexecuted_blocks=1 00:33:07.825 00:33:07.825 ' 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.825 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.825 08:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.826 08:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:07.826 08:30:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:33:07.826 08:30:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 
-- # local -ga e810 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:14.399 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.399 
08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:14.399 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:14.399 Found net devices under 0000:86:00.0: cvl_0_0 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:14.399 Found net devices under 0000:86:00.1: cvl_0_1 
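The "Found net devices" steps above use a two-step bash idiom: glob the sysfs directory under the PCI function to collect full paths, then strip everything up to the last `/` with the `##*/` parameter expansion so only bare interface names remain. A self-contained sketch (the sample path is illustrative; on a real host the glob is `/sys/bus/pci/devices/$pci/net/`*):

```shell
# Sketch of the idiom from nvmf/common.sh@227 and @243: collect sysfs
# paths for the net devices under a PCI function, then keep only the
# basename of each entry. The literal path here stands in for the glob.
pci_net_devs=("/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0")
pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the directory prefix
echo "${pci_net_devs[0]}"                 # cvl_0_0
```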
00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:33:14.399 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@247 -- # create_target_ns 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:14.400 08:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:14.400 08:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:14.400 10.0.0.1 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:14.400 08:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:14.400 10.0.0.2 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:33:14.400 
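The `set_ip` calls above turn integers from the IP pool (167772161, 167772162) into dotted-quad addresses via `val_to_ip`; the log shows only the final `printf '%u.%u.%u.%u\n' 10 0 0 2`. A reimplementation sketch follows; the shift-and-mask arithmetic is my assumption about how the four octets are derived, since the trace does not show that step.

```shell
# Reimplementation sketch of val_to_ip from nvmf/setup.sh: split a 32-bit
# integer into four octets and print it as a dotted quad. The shifting is
# assumed; the log only shows the resulting printf with the octets expanded.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (0x0a000001)
val_to_ip 167772162   # 10.0.0.2
```

This matches the pool arithmetic in the trace, where `ip_pool=0x0a000001` and each interface pair consumes two consecutive addresses.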
08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:33:14.400 08:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:14.400 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # 
dev=cvl_0_0 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:14.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:14.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:33:14.401 00:33:14.401 --- 10.0.0.1 ping statistics --- 00:33:14.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.401 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:14.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:14.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:33:14.401 00:33:14.401 --- 10.0.0.2 ping statistics --- 00:33:14.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.401 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:14.401 08:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:14.401 08:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:14.401 08:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target0 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@170 -- # get_ip_address target1 
NVMF_TARGET_NS_CMD 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:14.401 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # local dev=target1 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # return 1 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@159 -- # dev= 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@160 -- # return 0 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:33:14.402 ' 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:14.402 08:30:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=1904672 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 1904672 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 1904672 ']' 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 
00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.402 08:30:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.402 [2024-11-20 08:30:27.643395] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:14.402 [2024-11-20 08:30:27.644296] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:33:14.402 [2024-11-20 08:30:27.644330] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.402 [2024-11-20 08:30:27.720525] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:14.402 [2024-11-20 08:30:27.765929] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.402 [2024-11-20 08:30:27.765964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:14.402 [2024-11-20 08:30:27.765972] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.402 [2024-11-20 08:30:27.765979] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.402 [2024-11-20 08:30:27.765983] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:14.402 [2024-11-20 08:30:27.770220] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.402 [2024-11-20 08:30:27.770258] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:14.402 [2024-11-20 08:30:27.770371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.402 [2024-11-20 08:30:27.770371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:14.402 [2024-11-20 08:30:27.770649] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.662 08:30:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.662 [2024-11-20 08:30:28.575962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:14.662 [2024-11-20 08:30:28.576475] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:33:14.662 [2024-11-20 08:30:28.576492] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:33:14.662 [2024-11-20 08:30:28.576646] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.662 [2024-11-20 08:30:28.587023] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.662 Malloc0 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.662 08:30:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:14.662 [2024-11-20 08:30:28.655105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=1904738 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=1904741 00:33:14.662 08:30:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:14.662 { 00:33:14.662 "params": { 00:33:14.662 "name": "Nvme$subsystem", 00:33:14.662 "trtype": "$TEST_TRANSPORT", 00:33:14.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:14.662 "adrfam": "ipv4", 00:33:14.662 "trsvcid": "$NVMF_PORT", 00:33:14.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:14.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:14.662 "hdgst": ${hdgst:-false}, 00:33:14.662 "ddgst": ${ddgst:-false} 00:33:14.662 }, 00:33:14.662 "method": "bdev_nvme_attach_controller" 00:33:14.662 } 00:33:14.662 EOF 00:33:14.662 )") 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=1904743 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:33:14.662 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:14.662 08:30:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:14.662 { 00:33:14.662 "params": { 00:33:14.662 "name": "Nvme$subsystem", 00:33:14.662 "trtype": "$TEST_TRANSPORT", 00:33:14.662 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:14.662 "adrfam": "ipv4", 00:33:14.662 "trsvcid": "$NVMF_PORT", 00:33:14.662 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:14.662 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:14.662 "hdgst": ${hdgst:-false}, 00:33:14.662 "ddgst": ${ddgst:-false} 00:33:14.662 }, 00:33:14.662 "method": "bdev_nvme_attach_controller" 00:33:14.662 } 00:33:14.662 EOF 00:33:14.663 )") 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=1904746 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:14.663 { 00:33:14.663 "params": { 00:33:14.663 "name": "Nvme$subsystem", 00:33:14.663 "trtype": "$TEST_TRANSPORT", 00:33:14.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:14.663 "adrfam": "ipv4", 00:33:14.663 "trsvcid": "$NVMF_PORT", 00:33:14.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:14.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:14.663 "hdgst": ${hdgst:-false}, 00:33:14.663 "ddgst": ${ddgst:-false} 00:33:14.663 }, 00:33:14.663 "method": "bdev_nvme_attach_controller" 00:33:14.663 } 00:33:14.663 EOF 00:33:14.663 )") 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:14.663 { 00:33:14.663 "params": { 00:33:14.663 "name": "Nvme$subsystem", 00:33:14.663 "trtype": "$TEST_TRANSPORT", 00:33:14.663 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:14.663 "adrfam": "ipv4", 00:33:14.663 "trsvcid": "$NVMF_PORT", 00:33:14.663 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:14.663 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:14.663 "hdgst": ${hdgst:-false}, 00:33:14.663 "ddgst": ${ddgst:-false} 00:33:14.663 }, 00:33:14.663 "method": 
"bdev_nvme_attach_controller" 00:33:14.663 } 00:33:14.663 EOF 00:33:14.663 )") 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 1904738 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:14.663 "params": { 00:33:14.663 "name": "Nvme1", 00:33:14.663 "trtype": "tcp", 00:33:14.663 "traddr": "10.0.0.2", 00:33:14.663 "adrfam": "ipv4", 00:33:14.663 "trsvcid": "4420", 00:33:14.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:14.663 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:14.663 "hdgst": false, 00:33:14.663 "ddgst": false 00:33:14.663 }, 00:33:14.663 "method": "bdev_nvme_attach_controller" 00:33:14.663 }' 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:14.663 "params": { 00:33:14.663 "name": "Nvme1", 00:33:14.663 "trtype": "tcp", 00:33:14.663 "traddr": "10.0.0.2", 00:33:14.663 "adrfam": "ipv4", 00:33:14.663 "trsvcid": "4420", 00:33:14.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:14.663 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:14.663 "hdgst": false, 00:33:14.663 "ddgst": false 00:33:14.663 }, 00:33:14.663 "method": "bdev_nvme_attach_controller" 00:33:14.663 }' 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:14.663 "params": { 00:33:14.663 "name": "Nvme1", 00:33:14.663 "trtype": "tcp", 00:33:14.663 "traddr": "10.0.0.2", 00:33:14.663 "adrfam": "ipv4", 00:33:14.663 "trsvcid": "4420", 00:33:14.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:14.663 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:14.663 "hdgst": false, 00:33:14.663 "ddgst": false 00:33:14.663 }, 00:33:14.663 "method": "bdev_nvme_attach_controller" 00:33:14.663 }' 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:33:14.663 08:30:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:14.663 "params": { 00:33:14.663 "name": "Nvme1", 00:33:14.663 "trtype": "tcp", 00:33:14.663 "traddr": "10.0.0.2", 00:33:14.663 "adrfam": "ipv4", 00:33:14.663 "trsvcid": "4420", 00:33:14.663 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:14.663 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:14.663 "hdgst": false, 00:33:14.663 "ddgst": false 00:33:14.663 }, 00:33:14.663 "method": "bdev_nvme_attach_controller" 
00:33:14.663 }' 00:33:14.922 [2024-11-20 08:30:28.706450] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:33:14.922 [2024-11-20 08:30:28.706493] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:33:14.922 [2024-11-20 08:30:28.708116] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:33:14.922 [2024-11-20 08:30:28.708124] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:33:14.922 [2024-11-20 08:30:28.708169] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:33:14.922 [2024-11-20 08:30:28.708170] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:33:14.922 [2024-11-20 08:30:28.708348] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:33:14.922 [2024-11-20 08:30:28.708388] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:33:14.922 [2024-11-20 08:30:28.905726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.180 [2024-11-20 08:30:28.948368] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:33:15.180 [2024-11-20 08:30:29.000701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.181 [2024-11-20 08:30:29.041049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:33:15.181 [2024-11-20 08:30:29.084524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.181 [2024-11-20 08:30:29.137461] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:33:15.181 [2024-11-20 08:30:29.154660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:15.181 [2024-11-20 08:30:29.196857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:33:15.439 Running I/O for 1 seconds... 00:33:15.439 Running I/O for 1 seconds... 00:33:15.439 Running I/O for 1 seconds... 00:33:15.439 Running I/O for 1 seconds... 
00:33:16.376 14175.00 IOPS, 55.37 MiB/s 00:33:16.376 Latency(us) 00:33:16.376 [2024-11-20T07:30:30.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.376 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:33:16.376 Nvme1n1 : 1.01 14238.86 55.62 0.00 0.00 8963.81 1552.58 11671.65 00:33:16.376 [2024-11-20T07:30:30.404Z] =================================================================================================================== 00:33:16.376 [2024-11-20T07:30:30.404Z] Total : 14238.86 55.62 0.00 0.00 8963.81 1552.58 11671.65 00:33:16.376 7175.00 IOPS, 28.03 MiB/s 00:33:16.376 Latency(us) 00:33:16.376 [2024-11-20T07:30:30.404Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.376 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:33:16.376 Nvme1n1 : 1.01 7211.24 28.17 0.00 0.00 17635.90 1856.85 29959.31 00:33:16.376 [2024-11-20T07:30:30.404Z] =================================================================================================================== 00:33:16.376 [2024-11-20T07:30:30.404Z] Total : 7211.24 28.17 0.00 0.00 17635.90 1856.85 29959.31 00:33:16.376 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 1904741 00:33:16.635 249352.00 IOPS, 974.03 MiB/s 00:33:16.635 Latency(us) 00:33:16.635 [2024-11-20T07:30:30.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.635 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:33:16.635 Nvme1n1 : 1.00 248967.63 972.53 0.00 0.00 511.52 234.06 1529.17 00:33:16.635 [2024-11-20T07:30:30.663Z] =================================================================================================================== 00:33:16.635 [2024-11-20T07:30:30.663Z] Total : 248967.63 972.53 0.00 0.00 511.52 234.06 1529.17 00:33:16.635 7617.00 IOPS, 29.75 MiB/s 00:33:16.635 Latency(us) 00:33:16.635 
[2024-11-20T07:30:30.663Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:16.635 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:33:16.635 Nvme1n1 : 1.01 7707.80 30.11 0.00 0.00 16561.43 4150.61 34952.53 00:33:16.635 [2024-11-20T07:30:30.663Z] =================================================================================================================== 00:33:16.635 [2024-11-20T07:30:30.663Z] Total : 7707.80 30.11 0.00 0.00 16561.43 4150.61 34952.53 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 1904743 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 1904746 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:16.635 08:30:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:16.635 rmmod nvme_tcp 00:33:16.635 rmmod nvme_fabrics 00:33:16.635 rmmod nvme_keyring 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 1904672 ']' 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 1904672 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 1904672 ']' 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 1904672 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:16.635 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1904672 00:33:16.895 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:16.895 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:16.895 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1904672' 00:33:16.895 killing process with pid 1904672 00:33:16.895 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 1904672 00:33:16.895 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 1904672 00:33:16.895 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:16.895 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:33:16.895 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@254 -- # local dev 00:33:16.895 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:16.895 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:16.895 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:16.895 08:30:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@121 -- # return 0 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:19.428 08:30:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:33:19.428 08:30:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@274 -- # iptr 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-save 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@548 -- # iptables-restore 00:33:19.428 00:33:19.428 real 0m11.565s 00:33:19.428 user 0m15.379s 00:33:19.428 sys 0m6.522s 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:33:19.428 ************************************ 00:33:19.428 END TEST nvmf_bdev_io_wait 00:33:19.428 ************************************ 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:19.428 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:19.429 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:19.429 ************************************ 00:33:19.429 START TEST nvmf_queue_depth 
00:33:19.429 ************************************ 00:33:19.429 08:30:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:33:19.429 * Looking for test storage... 00:33:19.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:33:19.429 08:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:33:19.429 08:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:19.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.429 --rc genhtml_branch_coverage=1 00:33:19.429 --rc genhtml_function_coverage=1 00:33:19.429 --rc genhtml_legend=1 00:33:19.429 --rc geninfo_all_blocks=1 00:33:19.429 --rc geninfo_unexecuted_blocks=1 00:33:19.429 00:33:19.429 ' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:19.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.429 --rc genhtml_branch_coverage=1 00:33:19.429 --rc genhtml_function_coverage=1 00:33:19.429 --rc genhtml_legend=1 00:33:19.429 --rc geninfo_all_blocks=1 00:33:19.429 --rc geninfo_unexecuted_blocks=1 00:33:19.429 00:33:19.429 ' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:19.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.429 --rc genhtml_branch_coverage=1 00:33:19.429 --rc genhtml_function_coverage=1 00:33:19.429 --rc genhtml_legend=1 00:33:19.429 --rc geninfo_all_blocks=1 00:33:19.429 --rc geninfo_unexecuted_blocks=1 00:33:19.429 
00:33:19.429 ' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:19.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:19.429 --rc genhtml_branch_coverage=1 00:33:19.429 --rc genhtml_function_coverage=1 00:33:19.429 --rc genhtml_legend=1 00:33:19.429 --rc geninfo_all_blocks=1 00:33:19.429 --rc geninfo_unexecuted_blocks=1 00:33:19.429 00:33:19.429 ' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:19.429 08:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.429 08:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:33:19.429 08:30:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:33:19.429 08:30:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:33:26.003 08:30:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:26.003 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 
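The discovery loop above compares each device ID against glob patterns such as `\0\x\1\0\1\7` to spot Mellanox parts that need extra handling, while the Intel `0x159b` (ice/E810) ports found here fall through. A hedged sketch of that classification, built only from the device IDs visible in this log (the real tables in nvmf/common.sh are longer):

```shell
# Classify a NIC by PCI device ID, echoing the driver family.
# The ID lists are a partial reconstruction from the e810/x722/mlx
# arrays visible in this trace, not the complete nvmf/common.sh set.
nic_family() {
    case "$1" in
        0x1592|0x159b) echo e810 ;;    # Intel E810 (ice driver)
        0x37d2)        echo x722 ;;    # Intel X722 (i40e driver)
        0x1013|0x1015|0x1017|0x1019|0x101b|0x101d|0x1021|0xa2d6|0xa2dc)
                       echo mlx ;;     # Mellanox ConnectX family
        *)             echo unknown ;;
    esac
}

nic_family 0x159b   # → e810
nic_family 0x1017   # → mlx
```

Matching on the device ID rather than the bound driver lets the script decide, per NIC, whether TCP testing is supported before it ever inspects `/sys/bus/pci/devices/$pci/net/`.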
00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:26.003 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:26.003 Found net devices under 0000:86:00.0: cvl_0_0 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:26.003 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:26.004 Found net devices under 0000:86:00.1: cvl_0_1 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@247 -- # create_target_ns 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:26.004 08:30:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:26.004 08:30:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:26.004 10.0.0.1 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 
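`val_to_ip` above turns the packed pool value 167772161 (`0x0a000001`) into `10.0.0.1`, which is what lets `setup_interfaces` hand out consecutive addresses by simply incrementing an integer per interface pair. A minimal reimplementation of that conversion, presumably via the same shift-and-mask arithmetic feeding the `printf` shown in the trace:

```shell
# Convert a 32-bit integer into dotted-quad notation, as setup.sh's
# val_to_ip does for the ip_pool value (0x0a000001 == 167772161).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) \
        $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) \
        $((  val        & 255 ))
}

val_to_ip 167772161   # → 10.0.0.1
val_to_ip 167772162   # → 10.0.0.2
```

The inverse direction is just the sum of shifted octets, so the initiator/target pair for pair N is `ip_pool + 2*N` and `ip_pool + 2*N + 1`.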
00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:26.004 10.0.0.2 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 
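The `set_ip` calls traced above drive a `val_to_ip` helper that turns a 32-bit integer IP pool value into dotted-quad notation (167772161 is 0x0A000001, i.e. 10.0.0.1). A minimal re-implementation of that conversion, reconstructed from the `printf '%u.%u.%u.%u\n'` line in the trace (the function body here is an assumption, not the exact `nvmf/setup.sh` source):

```shell
# Sketch of the val_to_ip conversion seen in the trace above:
# split a 32-bit integer into four octets, most significant first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) \
    $(( (val >> 16) & 255 )) \
    $(( (val >> 8) & 255 )) \
    $(( val & 255 ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

Incrementing the pool value by 2 per device pair (as the `ip_pool += 2` step later in the trace does) yields consecutive initiator/target addresses in the same /24.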
00:33:26.004 08:30:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:33:26.004 08:30:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:26.004 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo 
cvl_0_0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:26.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:26.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.423 ms 00:33:26.005 00:33:26.005 --- 10.0.0.1 ping statistics --- 00:33:26.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.005 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- 
# ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:26.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:33:26.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:33:26.005 00:33:26.005 --- 10.0.0.2 ping statistics --- 00:33:26.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:26.005 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:26.005 08:30:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev 
target0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target0 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:26.005 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@156 
-- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # local dev=target1 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # return 1 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@159 -- # dev= 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@160 -- # return 0 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:33:26.006 ' 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:26.006 08:30:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=1908705 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 1908705 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1908705 ']' 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:33:26.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.006 [2024-11-20 08:30:39.302222] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:26.006 [2024-11-20 08:30:39.303183] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:33:26.006 [2024-11-20 08:30:39.303241] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:26.006 [2024-11-20 08:30:39.382776] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.006 [2024-11-20 08:30:39.423297] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:26.006 [2024-11-20 08:30:39.423334] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:26.006 [2024-11-20 08:30:39.423342] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:26.006 [2024-11-20 08:30:39.423348] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:26.006 [2024-11-20 08:30:39.423353] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:26.006 [2024-11-20 08:30:39.423858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:26.006 [2024-11-20 08:30:39.488751] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:33:26.006 [2024-11-20 08:30:39.488957] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.006 [2024-11-20 08:30:39.556572] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.006 08:30:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.006 Malloc0 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.006 [2024-11-20 08:30:39.628654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
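The `rpc_cmd` invocations traced in `queue_depth.sh` lines 23-27 above amount to the following RPC sequence against the target (commands copied from the trace; invoking them through `rpc.py` directly, rather than the test suite's `rpc_cmd` wrapper, is an assumption, and the script requires a running `nvmf_tgt`):

```shell
# Hypothetical standalone replay of the target-side setup from the trace:
# create the TCP transport, back a subsystem with a 64 MiB malloc bdev,
# and listen on the namespace-side address from earlier in the log.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 -b Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

After the listener comes up (the `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice above), bdevperf attaches to the subsystem over TCP and runs the queue-depth workload.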
00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=1908780 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 1908780 /var/tmp/bdevperf.sock 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 1908780 ']' 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:26.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:26.006 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.006 [2024-11-20 08:30:39.680068] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:33:26.006 [2024-11-20 08:30:39.680109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1908780 ] 00:33:26.007 [2024-11-20 08:30:39.755139] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:26.007 [2024-11-20 08:30:39.797238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:26.007 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:26.007 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:33:26.007 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:26.007 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:26.007 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:26.007 NVMe0n1 00:33:26.007 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:26.007 08:30:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py 
-s /var/tmp/bdevperf.sock perform_tests 00:33:26.266 Running I/O for 10 seconds... 00:33:28.140 11547.00 IOPS, 45.11 MiB/s [2024-11-20T07:30:43.106Z] 12097.00 IOPS, 47.25 MiB/s [2024-11-20T07:30:44.489Z] 12269.33 IOPS, 47.93 MiB/s [2024-11-20T07:30:45.426Z] 12314.50 IOPS, 48.10 MiB/s [2024-11-20T07:30:46.363Z] 12439.00 IOPS, 48.59 MiB/s [2024-11-20T07:30:47.300Z] 12450.00 IOPS, 48.63 MiB/s [2024-11-20T07:30:48.238Z] 12474.57 IOPS, 48.73 MiB/s [2024-11-20T07:30:49.175Z] 12523.88 IOPS, 48.92 MiB/s [2024-11-20T07:30:50.112Z] 12518.78 IOPS, 48.90 MiB/s [2024-11-20T07:30:50.371Z] 12554.20 IOPS, 49.04 MiB/s 00:33:36.343 Latency(us) 00:33:36.343 [2024-11-20T07:30:50.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.343 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:33:36.343 Verification LBA range: start 0x0 length 0x4000 00:33:36.343 NVMe0n1 : 10.10 12524.26 48.92 0.00 0.00 81147.06 20222.54 64911.85 00:33:36.343 [2024-11-20T07:30:50.371Z] =================================================================================================================== 00:33:36.343 [2024-11-20T07:30:50.371Z] Total : 12524.26 48.92 0.00 0.00 81147.06 20222.54 64911.85 00:33:36.343 { 00:33:36.343 "results": [ 00:33:36.343 { 00:33:36.343 "job": "NVMe0n1", 00:33:36.343 "core_mask": "0x1", 00:33:36.343 "workload": "verify", 00:33:36.343 "status": "finished", 00:33:36.343 "verify_range": { 00:33:36.343 "start": 0, 00:33:36.343 "length": 16384 00:33:36.343 }, 00:33:36.343 "queue_depth": 1024, 00:33:36.343 "io_size": 4096, 00:33:36.343 "runtime": 10.103352, 00:33:36.343 "iops": 12524.259275535485, 00:33:36.343 "mibps": 48.92288779506049, 00:33:36.343 "io_failed": 0, 00:33:36.343 "io_timeout": 0, 00:33:36.343 "avg_latency_us": 81147.05793276349, 00:33:36.343 "min_latency_us": 20222.53714285714, 00:33:36.343 "max_latency_us": 64911.84761904762 00:33:36.343 } 00:33:36.343 ], 00:33:36.343 "core_count": 1 00:33:36.343 } 00:33:36.343 
08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 1908780 00:33:36.343 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1908780 ']' 00:33:36.343 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 1908780 00:33:36.343 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:36.343 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.343 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1908780 00:33:36.343 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:36.343 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:36.343 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1908780' 00:33:36.343 killing process with pid 1908780 00:33:36.343 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1908780 00:33:36.343 Received shutdown signal, test time was about 10.000000 seconds 00:33:36.343 00:33:36.343 Latency(us) 00:33:36.343 [2024-11-20T07:30:50.371Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:36.343 [2024-11-20T07:30:50.371Z] =================================================================================================================== 00:33:36.343 [2024-11-20T07:30:50.371Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:36.343 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
common/autotest_common.sh@978 -- # wait 1908780 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:36.603 rmmod nvme_tcp 00:33:36.603 rmmod nvme_fabrics 00:33:36.603 rmmod nvme_keyring 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 1908705 ']' 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 1908705 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 1908705 ']' 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 
-- # kill -0 1908705 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1908705 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1908705' 00:33:36.603 killing process with pid 1908705 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 1908705 00:33:36.603 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 1908705 00:33:36.862 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:36.862 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:33:36.862 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@254 -- # local dev 00:33:36.862 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:36.862 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:36.862 08:30:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:36.862 08:30:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@121 -- # return 0 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:33:38.812 08:30:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@274 -- # iptr 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-save 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@548 -- # iptables-restore 00:33:38.812 00:33:38.812 real 0m19.796s 00:33:38.812 user 0m22.651s 00:33:38.812 sys 0m6.428s 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:33:38.812 ************************************ 00:33:38.812 END TEST nvmf_queue_depth 00:33:38.812 ************************************ 00:33:38.812 08:30:52 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:38.812 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:39.110 ************************************ 00:33:39.110 START TEST nvmf_target_multipath 00:33:39.110 ************************************ 00:33:39.110 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:33:39.110 * Looking for test storage... 00:33:39.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:39.110 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:39.111 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lcov --version 00:33:39.111 08:30:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:39.111 08:30:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:39.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.111 --rc genhtml_branch_coverage=1 00:33:39.111 --rc genhtml_function_coverage=1 00:33:39.111 --rc genhtml_legend=1 00:33:39.111 --rc geninfo_all_blocks=1 00:33:39.111 --rc geninfo_unexecuted_blocks=1 00:33:39.111 00:33:39.111 ' 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:39.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.111 --rc genhtml_branch_coverage=1 00:33:39.111 --rc genhtml_function_coverage=1 00:33:39.111 --rc genhtml_legend=1 00:33:39.111 --rc geninfo_all_blocks=1 00:33:39.111 --rc geninfo_unexecuted_blocks=1 00:33:39.111 00:33:39.111 ' 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:39.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.111 --rc genhtml_branch_coverage=1 00:33:39.111 --rc genhtml_function_coverage=1 00:33:39.111 --rc genhtml_legend=1 00:33:39.111 --rc geninfo_all_blocks=1 00:33:39.111 --rc geninfo_unexecuted_blocks=1 00:33:39.111 00:33:39.111 ' 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:39.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:39.111 --rc genhtml_branch_coverage=1 00:33:39.111 --rc genhtml_function_coverage=1 00:33:39.111 --rc genhtml_legend=1 00:33:39.111 --rc geninfo_all_blocks=1 00:33:39.111 --rc geninfo_unexecuted_blocks=1 00:33:39.111 00:33:39.111 ' 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.111 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@50 -- # : 0 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:39.112 08:30:53 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@260 -- # remove_target_ns 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # xtrace_disable 00:33:39.112 08:30:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@131 -- # pci_devs=() 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@133 
-- # local -A pci_drivers 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@135 -- # net_devs=() 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@136 -- # e810=() 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@136 -- # local -ga e810 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@137 -- # x722=() 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@137 -- # local -ga x722 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@138 -- # mlx=() 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@138 -- # local -ga mlx 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:45.683 08:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:45.683 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 
00:33:45.684 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:45.684 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ e810 == 
e810 ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:45.684 Found net devices under 0000:86:00.0: cvl_0_0 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:45.684 08:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:45.684 Found net devices under 0000:86:00.1: cvl_0_1 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # is_hw=yes 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@247 -- # create_target_ns 00:33:45.684 08:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@28 -- # local 
-g _dev 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # ips=() 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:45.684 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:33:45.685 08:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772161 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- 
# eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:45.685 10.0.0.1 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@11 -- # local val=167772162 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias' 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:45.685 10.0.0.2 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:45.685 08:30:58 
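The `set_ip` calls above feed an integer from the IP pool (167772161, i.e. 0x0a000001) through a `val_to_ip` helper, which the trace shows expanding to `printf '%u.%u.%u.%u\n' 10 0 0 1`. A minimal standalone sketch of that conversion (the function name mirrors the helper in nvmf/setup.sh; the exact implementation there may differ):

```shell
# Convert a 32-bit integer to dotted-quad notation, as the trace's
# val_to_ip helper does: 167772161 == 0x0a000001 -> 10.0.0.1.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator side of pair 0)
val_to_ip 167772162   # 10.0.0.2 (target side of pair 0)
```

This is why consecutive interface pairs consume the pool two addresses at a time (`ip_pool += 2` in the loop above).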
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:45.685 
08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:45.685 
08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:45.685 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:45.685 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.499 ms 00:33:45.685 00:33:45.685 --- 10.0.0.1 ping statistics --- 00:33:45.685 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.685 rtt min/avg/max/mdev = 0.499/0.499/0.499/0.000 ms 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:45.685 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:45.686 08:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 
00:33:45.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:45.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:33:45.686 00:33:45.686 --- 10.0.0.2 ping statistics --- 00:33:45.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:45.686 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@270 -- # return 0 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:45.686 08:30:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:45.686 08:30:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 
00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target0 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:45.686 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:33:45.686 08:30:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@98 -- # local dev=target1 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@100 -- # return 1 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@159 -- # dev= 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@160 -- # return 0 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 
00:33:45.687 ' 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:33:45.687 only one NIC for nvmf test 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@104 
-- # modprobe -v -r nvme-tcp 00:33:45.687 rmmod nvme_tcp 00:33:45.687 rmmod nvme_fabrics 00:33:45.687 rmmod nvme_keyring 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:45.687 08:30:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:33:47.593 
08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # eval ' ip addr 
flush dev cvl_0_1' 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@99 -- # sync 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:47.593 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@102 -- # set +e 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@106 -- # set -e 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@107 -- # return 0 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@342 -- # nvmf_fini 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@254 -- # local dev 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@257 -- # remove_target_ns 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@258 -- # delete_main_bridge 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@121 -- # return 0 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/setup.sh@273 -- # reset_setup_interfaces 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # _dev=0 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@41 -- # dev_map=() 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/setup.sh@274 -- # iptr 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-save 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@548 -- # iptables-restore 00:33:47.594 00:33:47.594 real 0m8.409s 00:33:47.594 user 0m1.958s 00:33:47.594 sys 0m4.455s 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:33:47.594 ************************************ 00:33:47.594 END TEST nvmf_target_multipath 00:33:47.594 ************************************ 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:33:47.594 ************************************ 00:33:47.594 START TEST nvmf_zcopy 00:33:47.594 ************************************ 
00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:33:47.594 * Looking for test storage... 00:33:47.594 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:33:47.594 08:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:47.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.594 --rc genhtml_branch_coverage=1 00:33:47.594 --rc genhtml_function_coverage=1 00:33:47.594 --rc genhtml_legend=1 00:33:47.594 --rc geninfo_all_blocks=1 00:33:47.594 --rc geninfo_unexecuted_blocks=1 00:33:47.594 00:33:47.594 ' 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:47.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.594 --rc genhtml_branch_coverage=1 00:33:47.594 --rc genhtml_function_coverage=1 00:33:47.594 --rc genhtml_legend=1 00:33:47.594 --rc geninfo_all_blocks=1 00:33:47.594 --rc geninfo_unexecuted_blocks=1 00:33:47.594 00:33:47.594 ' 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:47.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.594 --rc genhtml_branch_coverage=1 00:33:47.594 --rc genhtml_function_coverage=1 00:33:47.594 --rc genhtml_legend=1 00:33:47.594 --rc geninfo_all_blocks=1 00:33:47.594 --rc geninfo_unexecuted_blocks=1 00:33:47.594 00:33:47.594 ' 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:47.594 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:47.594 --rc genhtml_branch_coverage=1 00:33:47.594 --rc 
genhtml_function_coverage=1 00:33:47.594 --rc genhtml_legend=1 00:33:47.594 --rc geninfo_all_blocks=1 00:33:47.594 --rc geninfo_unexecuted_blocks=1 00:33:47.594 00:33:47.594 ' 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:47.594 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:47.594 08:31:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:47.595 08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:33:47.595 
08:31:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 
00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:54.168 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:54.168 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:54.168 Found net devices under 0000:86:00.0: cvl_0_0 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 
00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:54.168 Found net devices under 0000:86:00.1: cvl_0_1 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:33:54.168 08:31:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@247 -- # create_target_ns 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:33:54.168 08:31:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:33:54.168 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:33:54.169 08:31:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:54.169 10.0.0.1 00:33:54.169 08:31:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:54.169 10.0.0.2 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:33:54.169 08:31:07 
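The `set_ip` steps traced above call a `val_to_ip` helper that turns the integer 167772161 into the dotted-quad 10.0.0.1 via `printf '%u.%u.%u.%u'`. A minimal standalone sketch of that conversion follows; the helper name matches the trace, but the byte-shifting shown is an assumption about how setup.sh derives the four octets:

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip helper seen in the trace: convert a 32-bit
# integer (e.g. 167772161 == 0x0a000001) to dotted-quad notation.
# The shift/mask arithmetic is an assumption about setup.sh's internals;
# only the printf format string appears verbatim in the log.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # the initiator address assigned above -> 10.0.0.1
val_to_ip 167772162   # the target address assigned above   -> 10.0.0.2
```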
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 
'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ 
-n cvl_0_0 ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:54.169 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:54.169 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.485 ms 00:33:54.169 00:33:54.169 --- 10.0.0.1 ping statistics --- 00:33:54.169 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.169 rtt min/avg/max/mdev = 0.485/0.485/0.485/0.000 ms 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:54.169 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:54.170 
08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:33:54.170 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:54.170 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:33:54.170 00:33:54.170 --- 10.0.0.2 ping statistics --- 00:33:54.170 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:54.170 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair++ )) 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 
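The `setup_interfaces` loop traced above draws addresses from `ip_pool=0x0a000001`, handing each initiator/target pair two consecutive integers (hence 167772161 and 167772162 for pair 0) and bounding the count with `(_dev + no) * 2 <= 255`. A small sketch of that pool arithmetic, with an illustrative helper name not taken from the script:

```shell
#!/usr/bin/env bash
# Sketch of the address-pool arithmetic from setup_interfaces: the pool
# starts at 0x0a000001 (10.0.0.1) and each initiator/target pair consumes
# two consecutive addresses. pair_base is an illustrative name only.
ip_pool=$(( 0x0a000001 ))

pair_base() {   # first (initiator) address of 0-based pair $1
  local id=$1
  echo $(( ip_pool + id * 2 ))
}

# Pair 0 -> 167772161/167772162 (10.0.0.1 / 10.0.0.2), matching the
# 'set_ip cvl_0_0 167772161' and 'set_ip cvl_0_1 167772162' trace lines.
echo "pair 0: $(pair_base 0) $(( $(pair_base 0) + 1 ))"
echo "pair 1: $(pair_base 1) $(( $(pair_base 1) + 1 ))"
```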
00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator0 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:33:54.170 08:31:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=initiator1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target0 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target0 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:33:54.170 
08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # get_net_dev target1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # local dev=target1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # return 1 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@159 -- # dev= 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@160 -- # return 0 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:33:54.170 08:31:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:33:54.170 ' 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=1917468 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 1917468 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 1917468 ']' 00:33:54.170 08:31:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:54.170 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:54.170 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 [2024-11-20 08:31:07.608177] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:33:54.171 [2024-11-20 08:31:07.609133] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:33:54.171 [2024-11-20 08:31:07.609171] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:54.171 [2024-11-20 08:31:07.686464] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.171 [2024-11-20 08:31:07.727553] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:54.171 [2024-11-20 08:31:07.727588] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:54.171 [2024-11-20 08:31:07.727595] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:54.171 [2024-11-20 08:31:07.727601] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:33:54.171 [2024-11-20 08:31:07.727606] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:54.171 [2024-11-20 08:31:07.728128] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:54.171 [2024-11-20 08:31:07.794127] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:33:54.171 [2024-11-20 08:31:07.794336] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 [2024-11-20 08:31:07.860798] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 
00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 [2024-11-20 08:31:07.885011] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 malloc0 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:54.171 { 00:33:54.171 "params": { 00:33:54.171 "name": "Nvme$subsystem", 00:33:54.171 "trtype": "$TEST_TRANSPORT", 00:33:54.171 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:33:54.171 "adrfam": "ipv4", 00:33:54.171 "trsvcid": "$NVMF_PORT", 00:33:54.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:54.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:54.171 "hdgst": ${hdgst:-false}, 00:33:54.171 "ddgst": ${ddgst:-false} 00:33:54.171 }, 00:33:54.171 "method": "bdev_nvme_attach_controller" 00:33:54.171 } 00:33:54.171 EOF 00:33:54.171 )") 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:33:54.171 08:31:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:54.171 "params": { 00:33:54.171 "name": "Nvme1", 00:33:54.171 "trtype": "tcp", 00:33:54.171 "traddr": "10.0.0.2", 00:33:54.171 "adrfam": "ipv4", 00:33:54.171 "trsvcid": "4420", 00:33:54.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:54.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:54.171 "hdgst": false, 00:33:54.171 "ddgst": false 00:33:54.171 }, 00:33:54.171 "method": "bdev_nvme_attach_controller" 00:33:54.171 }' 00:33:54.171 [2024-11-20 08:31:07.977761] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:33:54.171 [2024-11-20 08:31:07.977813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1917489 ] 00:33:54.171 [2024-11-20 08:31:08.052574] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:54.171 [2024-11-20 08:31:08.095420] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:54.430 Running I/O for 10 seconds... 
00:33:56.755 8506.00 IOPS, 66.45 MiB/s [2024-11-20T07:31:11.719Z] 8553.50 IOPS, 66.82 MiB/s [2024-11-20T07:31:12.656Z] 8555.67 IOPS, 66.84 MiB/s [2024-11-20T07:31:13.593Z] 8576.50 IOPS, 67.00 MiB/s [2024-11-20T07:31:14.531Z] 8586.00 IOPS, 67.08 MiB/s [2024-11-20T07:31:15.467Z] 8600.17 IOPS, 67.19 MiB/s [2024-11-20T07:31:16.844Z] 8601.43 IOPS, 67.20 MiB/s [2024-11-20T07:31:17.781Z] 8603.25 IOPS, 67.21 MiB/s [2024-11-20T07:31:18.718Z] 8603.11 IOPS, 67.21 MiB/s [2024-11-20T07:31:18.718Z] 8598.30 IOPS, 67.17 MiB/s 00:34:04.690 Latency(us) 00:34:04.690 [2024-11-20T07:31:18.718Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:04.690 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:34:04.690 Verification LBA range: start 0x0 length 0x1000 00:34:04.691 Nvme1n1 : 10.01 8602.43 67.21 0.00 0.00 14837.83 421.30 20971.52 00:34:04.691 [2024-11-20T07:31:18.719Z] =================================================================================================================== 00:34:04.691 [2024-11-20T07:31:18.719Z] Total : 8602.43 67.21 0.00 0.00 14837.83 421.30 20971.52 00:34:04.691 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=1919286 00:34:04.691 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:34:04.691 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:04.691 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:34:04.691 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:34:04.691 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:34:04.691 08:31:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:34:04.691 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:04.691 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:04.691 { 00:34:04.691 "params": { 00:34:04.691 "name": "Nvme$subsystem", 00:34:04.691 "trtype": "$TEST_TRANSPORT", 00:34:04.691 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:04.691 "adrfam": "ipv4", 00:34:04.691 "trsvcid": "$NVMF_PORT", 00:34:04.691 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:04.691 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:04.691 "hdgst": ${hdgst:-false}, 00:34:04.691 "ddgst": ${ddgst:-false} 00:34:04.691 }, 00:34:04.691 "method": "bdev_nvme_attach_controller" 00:34:04.691 } 00:34:04.691 EOF 00:34:04.691 )") 00:34:04.691 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:34:04.691 [2024-11-20 08:31:18.604473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.691 [2024-11-20 08:31:18.604501] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.691 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:34:04.691 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:34:04.691 08:31:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:34:04.691 "params": { 00:34:04.691 "name": "Nvme1", 00:34:04.691 "trtype": "tcp", 00:34:04.691 "traddr": "10.0.0.2", 00:34:04.691 "adrfam": "ipv4", 00:34:04.691 "trsvcid": "4420", 00:34:04.691 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:04.691 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:04.691 "hdgst": false, 00:34:04.691 "ddgst": false 00:34:04.691 }, 00:34:04.691 "method": "bdev_nvme_attach_controller" 00:34:04.691 }' 00:34:04.691 [2024-11-20 08:31:18.616434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.691 [2024-11-20 08:31:18.616447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.691 [2024-11-20 08:31:18.628432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.691 [2024-11-20 08:31:18.628442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.691 [2024-11-20 08:31:18.640431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.691 [2024-11-20 08:31:18.640442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.691 [2024-11-20 08:31:18.644283] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:34:04.691 [2024-11-20 08:31:18.644323] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1919286 ] 00:34:04.691 [2024-11-20 08:31:18.652431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.691 [2024-11-20 08:31:18.652443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.691 [2024-11-20 08:31:18.664430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.691 [2024-11-20 08:31:18.664441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.691 [2024-11-20 08:31:18.676430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.691 [2024-11-20 08:31:18.676441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.691 [2024-11-20 08:31:18.688429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.691 [2024-11-20 08:31:18.688439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.691 [2024-11-20 08:31:18.700428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.691 [2024-11-20 08:31:18.700438] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.691 [2024-11-20 08:31:18.712433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.691 [2024-11-20 08:31:18.712442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.718145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:04.951 [2024-11-20 08:31:18.724431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:34:04.951 [2024-11-20 08:31:18.724442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.736450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.736468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.748431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.748442] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.759905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:04.951 [2024-11-20 08:31:18.760446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.760464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.772443] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.772458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.784438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.784458] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.796434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.796449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.808431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.808443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.820444] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.820457] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.832432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.832444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.844441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.844470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.856437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.856452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.868441] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.868467] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.880434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.880449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.892431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.892441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.904430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.904440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.916430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.916439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.928434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.928448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.940433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.940443] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.952431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.952445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:04.951 [2024-11-20 08:31:18.964439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:04.951 [2024-11-20 08:31:18.964450] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.210 [2024-11-20 08:31:18.976433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.210 [2024-11-20 08:31:18.976446] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.210 [2024-11-20 08:31:18.988431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.210 [2024-11-20 08:31:18.988440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.210 [2024-11-20 08:31:19.000430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.000440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.012432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 
[2024-11-20 08:31:19.012444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.024438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.024466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 Running I/O for 5 seconds... 00:34:05.211 [2024-11-20 08:31:19.042499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.042519] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.057468] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.057487] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.072948] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.072967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.085331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.085350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.100983] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.101002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.117104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.117123] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.133026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 
08:31:19.133044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.145075] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.145093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.157719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.157738] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.172403] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.172422] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.184534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.184554] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.197900] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.197920] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.212839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.212858] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.211 [2024-11-20 08:31:19.227910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.211 [2024-11-20 08:31:19.227929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.469 [2024-11-20 08:31:19.243184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.469 [2024-11-20 08:31:19.243212] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:34:05.469 [2024-11-20 08:31:19.257808] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.469 [2024-11-20 08:31:19.257826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.469 [2024-11-20 08:31:19.273046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.469 [2024-11-20 08:31:19.273065] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.469 [2024-11-20 08:31:19.288774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.469 [2024-11-20 08:31:19.288792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.469 [2024-11-20 08:31:19.301219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.469 [2024-11-20 08:31:19.301254] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.469 [2024-11-20 08:31:19.316319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.469 [2024-11-20 08:31:19.316338] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.469 [2024-11-20 08:31:19.329628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.469 [2024-11-20 08:31:19.329647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.469 [2024-11-20 08:31:19.340809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.469 [2024-11-20 08:31:19.340828] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.469 [2024-11-20 08:31:19.354304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.469 [2024-11-20 08:31:19.354324] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.469 
[2024-11-20 08:31:19.369488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:05.469 [2024-11-20 08:31:19.369507] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:05.469 [... the same subsystem.c:2123/nvmf_rpc.c:1517 error pair repeats continuously, roughly every 11-17 ms, from 08:31:19.384 through 08:31:21.777, with two periodic throughput samples interleaved ...] 00:34:06.250 16618.00 IOPS, 129.83 MiB/s [2024-11-20T07:31:20.278Z] 00:34:07.029 16552.00 IOPS, 129.31 MiB/s [2024-11-20T07:31:21.057Z] [2024-11-20 08:31:21.792184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.810 [2024-11-20 08:31:21.792208] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.810 [2024-11-20 08:31:21.805167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.810 [2024-11-20 08:31:21.805186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.810 [2024-11-20 08:31:21.817738] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.810 [2024-11-20 08:31:21.817757] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:07.810 [2024-11-20 08:31:21.828883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:07.810 [2024-11-20 08:31:21.828901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:21.842115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:21.842136] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:21.856750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:21.856768] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:21.868155] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:21.868179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:21.882598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:21.882618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:21.897295] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:21.897314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:08.069 [2024-11-20 08:31:21.912818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:21.912836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:21.926218] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:21.926253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:21.941343] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:21.941362] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:21.956486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:21.956505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:21.970635] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:21.970654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:21.985632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:21.985651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:22.000683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:22.000703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:22.013310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:22.013328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:22.024659] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:22.024678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 [2024-11-20 08:31:22.038289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.069 [2024-11-20 08:31:22.038308] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.069 16599.67 IOPS, 129.68 MiB/s [2024-11-20T07:31:22.097Z] [2024-11-20 08:31:22.053103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.070 [2024-11-20 08:31:22.053122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.070 [2024-11-20 08:31:22.068224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.070 [2024-11-20 08:31:22.068243] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.070 [2024-11-20 08:31:22.082209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.070 [2024-11-20 08:31:22.082228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.096695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.096716] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.109112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.109131] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.122393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.122417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.137056] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.137078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.152388] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.152407] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.166391] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.166410] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.181279] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.181299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.197015] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.197034] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.212321] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.212340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.226347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.226366] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.241127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.241146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.256607] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.256626] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.269902] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.269921] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.284774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.284792] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.296047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.296066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.310362] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.310381] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.325216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.325235] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.329 [2024-11-20 08:31:22.340089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.329 [2024-11-20 08:31:22.340108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.354488] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.354508] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.369143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 
[2024-11-20 08:31:22.369161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.383940] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.383959] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.395242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.395261] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.409828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.409851] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.424419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.424439] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.435859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.435879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.450745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.450764] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.465575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.465593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.481028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.481046] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.496546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.496565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.508978] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.508997] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.522517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.522536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.537449] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.537468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.552657] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.552677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.563876] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.563896] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.578007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.578028] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.589 [2024-11-20 08:31:22.592653] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.592674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:08.589 [2024-11-20 08:31:22.605452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.589 [2024-11-20 08:31:22.605473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.620570] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.620590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.631790] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.631809] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.646617] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.646637] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.661368] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.661388] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.672557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.672576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.686326] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.686345] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.701102] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.701122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.716075] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.716094] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.729140] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.729160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.742025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.742045] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.752401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.752420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.766257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.766276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.781428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.781448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.796641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.796662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.807873] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.807892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.822213] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.822232] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.836884] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.836902] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.852150] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.852169] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:08.848 [2024-11-20 08:31:22.866090] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:08.848 [2024-11-20 08:31:22.866109] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 [2024-11-20 08:31:22.880946] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.107 [2024-11-20 08:31:22.880967] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 [2024-11-20 08:31:22.893346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.107 [2024-11-20 08:31:22.893367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 [2024-11-20 08:31:22.908088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.107 [2024-11-20 08:31:22.908107] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 [2024-11-20 08:31:22.921889] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.107 [2024-11-20 08:31:22.921909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 [2024-11-20 08:31:22.936541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.107 
[2024-11-20 08:31:22.936561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 [2024-11-20 08:31:22.949228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.107 [2024-11-20 08:31:22.949247] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 [2024-11-20 08:31:22.964385] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.107 [2024-11-20 08:31:22.964405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 [2024-11-20 08:31:22.978516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.107 [2024-11-20 08:31:22.978535] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 [2024-11-20 08:31:22.993141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.107 [2024-11-20 08:31:22.993159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 [2024-11-20 08:31:23.008200] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.107 [2024-11-20 08:31:23.008227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 [2024-11-20 08:31:23.022601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.107 [2024-11-20 08:31:23.022619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 [2024-11-20 08:31:23.037436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.107 [2024-11-20 08:31:23.037454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.107 16637.25 IOPS, 129.98 MiB/s [2024-11-20T07:31:23.135Z] [2024-11-20 08:31:23.052366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.108 
[2024-11-20 08:31:23.052387] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.108 [2024-11-20 08:31:23.063397] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.108 [2024-11-20 08:31:23.063417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.108 [2024-11-20 08:31:23.078267] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.108 [2024-11-20 08:31:23.078285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.108 [2024-11-20 08:31:23.093163] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.108 [2024-11-20 08:31:23.093182] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.108 [2024-11-20 08:31:23.108002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.108 [2024-11-20 08:31:23.108021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.108 [2024-11-20 08:31:23.122130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.108 [2024-11-20 08:31:23.122149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.137275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.137295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.152139] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.152157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.165287] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.165306] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.177741] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.177760] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.192685] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.192712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.203198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.203224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.217604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.217622] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.232161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.232180] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.246286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.246306] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.261106] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.261125] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.276730] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.276749] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:09.367 [2024-11-20 08:31:23.292905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.292924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.306408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.306426] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.321029] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.321047] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.336177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.336196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.350113] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.350132] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.360918] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.360936] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.374450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.374468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.367 [2024-11-20 08:31:23.389145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.367 [2024-11-20 08:31:23.389164] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.404417] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.404436] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.416679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.416698] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.430452] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.430482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.444882] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.444900] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.460109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.460134] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.474439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.474459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.488565] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.488584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.500626] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.500644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.513787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.513806] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.528545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.528565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.539497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.539516] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.554545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.554564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.568928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.568947] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.581137] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.581154] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.595871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.595890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.609259] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.609277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.622071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 
[2024-11-20 08:31:23.622090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.636964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.636982] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.627 [2024-11-20 08:31:23.648463] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.627 [2024-11-20 08:31:23.648482] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.662707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.662727] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.677261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.677280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.692384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.692402] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.703561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.703579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.718668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.718693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.733330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.733348] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.748470] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.748488] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.761331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.761350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.775953] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.775971] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.790492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.790511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.804984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.805002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.820335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.820354] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.833191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.833215] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.849121] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.849141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:34:09.887 [2024-11-20 08:31:23.864256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.864276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.875456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.875474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.890499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.890518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:09.887 [2024-11-20 08:31:23.904643] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:09.887 [2024-11-20 08:31:23.904662] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.147 [2024-11-20 08:31:23.916554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.147 [2024-11-20 08:31:23.916574] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.147 [2024-11-20 08:31:23.929942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.147 [2024-11-20 08:31:23.929960] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.147 [2024-11-20 08:31:23.944534] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.147 [2024-11-20 08:31:23.944552] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.147 [2024-11-20 08:31:23.958225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.147 [2024-11-20 08:31:23.958259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.147 [2024-11-20 08:31:23.973294] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:23.973312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20 08:31:23.988402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:23.988427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20 08:31:24.002809] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:24.002829] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20 08:31:24.017436] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:24.017468] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20 08:31:24.032183] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:24.032209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20 08:31:24.046050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:24.046071] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 16659.80 IOPS, 130.15 MiB/s
00:34:10.147 Latency(us)
00:34:10.147 [2024-11-20T07:31:24.175Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:10.147 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:34:10.147 Nvme1n1 : 5.01 16658.94 130.15 0.00 0.00 7676.98 1950.48 14917.24
00:34:10.147 [2024-11-20T07:31:24.175Z] ===================================================================================================================
00:34:10.147 [2024-11-20T07:31:24.175Z] Total : 16658.94 130.15 0.00 0.00 7676.98 1950.48 14917.24
00:34:10.147 [2024-11-20 08:31:24.056440] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:24.056459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20 08:31:24.068439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:24.068456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20 08:31:24.080451] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:24.080466] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20 08:31:24.092442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:24.092470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20 08:31:24.104439] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:24.104454] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20 08:31:24.116433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:24.116447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20 08:31:24.128434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:24.128449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20 08:31:24.140434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:34:10.147 [2024-11-20 08:31:24.140449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:34:10.147 [2024-11-20
08:31:24.152442] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.147 [2024-11-20 08:31:24.152456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.147 [2024-11-20 08:31:24.164431] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.147 [2024-11-20 08:31:24.164444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.407 [2024-11-20 08:31:24.176432] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.407 [2024-11-20 08:31:24.176444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.407 [2024-11-20 08:31:24.188434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.407 [2024-11-20 08:31:24.188448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.407 [2024-11-20 08:31:24.200429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.407 [2024-11-20 08:31:24.200440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.407 [2024-11-20 08:31:24.212430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:34:10.407 [2024-11-20 08:31:24.212441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:10.407 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (1919286) - No such process 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 1919286 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.407 08:31:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.407 delay0 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:10.407 08:31:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:34:10.407 [2024-11-20 08:31:24.360536] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:34:16.975 Initializing NVMe Controllers 00:34:16.975 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:16.975 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:34:16.975 Initialization complete. Launching workers. 00:34:16.975 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 3575 00:34:16.975 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 3842, failed to submit 53 00:34:16.975 success 3701, unsuccessful 141, failed 0 00:34:16.975 08:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:34:16.975 08:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:34:16.975 08:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:16.975 08:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:34:16.975 08:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:16.975 08:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:34:16.975 08:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:16.975 08:31:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:17.234 rmmod nvme_tcp 00:34:17.234 rmmod nvme_fabrics 00:34:17.234 rmmod nvme_keyring 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 1917468 ']' 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 
1917468 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 1917468 ']' 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 1917468 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1917468 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:17.234 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1917468' 00:34:17.234 killing process with pid 1917468 00:34:17.235 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 1917468 00:34:17.235 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 1917468 00:34:17.493 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:17.493 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:34:17.493 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@254 -- # local dev 00:34:17.493 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@257 -- # remove_target_ns 00:34:17.493 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:17.493 
08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:17.493 08:31:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@258 -- # delete_main_bridge 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@121 -- # return 0 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@265 -- # (( 4 
== 3 )) 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@274 -- # iptr 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-save 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@548 -- # iptables-restore 00:34:19.397 00:34:19.397 real 0m32.013s 00:34:19.397 user 0m41.390s 00:34:19.397 sys 0m12.664s 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:34:19.397 ************************************ 00:34:19.397 END TEST nvmf_zcopy 00:34:19.397 ************************************ 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # 
run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:19.397 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:19.656 ************************************ 00:34:19.656 START TEST nvmf_nmic 00:34:19.656 ************************************ 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:34:19.656 * Looking for test storage... 00:34:19.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:34:19.656 08:31:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:19.656 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.656 --rc genhtml_branch_coverage=1 00:34:19.656 --rc 
genhtml_function_coverage=1 00:34:19.656 --rc genhtml_legend=1 00:34:19.656 --rc geninfo_all_blocks=1 00:34:19.656 --rc geninfo_unexecuted_blocks=1 00:34:19.656 00:34:19.656 ' 00:34:19.656 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:19.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.657 --rc genhtml_branch_coverage=1 00:34:19.657 --rc genhtml_function_coverage=1 00:34:19.657 --rc genhtml_legend=1 00:34:19.657 --rc geninfo_all_blocks=1 00:34:19.657 --rc geninfo_unexecuted_blocks=1 00:34:19.657 00:34:19.657 ' 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:19.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.657 --rc genhtml_branch_coverage=1 00:34:19.657 --rc genhtml_function_coverage=1 00:34:19.657 --rc genhtml_legend=1 00:34:19.657 --rc geninfo_all_blocks=1 00:34:19.657 --rc geninfo_unexecuted_blocks=1 00:34:19.657 00:34:19.657 ' 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:19.657 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:19.657 --rc genhtml_branch_coverage=1 00:34:19.657 --rc genhtml_function_coverage=1 00:34:19.657 --rc genhtml_legend=1 00:34:19.657 --rc geninfo_all_blocks=1 00:34:19.657 --rc geninfo_unexecuted_blocks=1 00:34:19.657 00:34:19.657 ' 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:19.657 08:31:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:34:19.657 08:31:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.225 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # local -A 
pci_drivers 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:26.226 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:26.226 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:26.226 
08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:26.226 Found net devices under 0000:86:00.0: cvl_0_0 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:26.226 Found net devices under 0000:86:00.1: cvl_0_1 00:34:26.226 08:31:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:26.226 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@247 -- # create_target_ns 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:26.227 
08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:34:26.227 08:31:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:26.227 10.0.0.1 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # eval 'ip netns exec 
nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:26.227 10.0.0.2 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:26.227 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:34:26.228 
08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:34:26.228 08:31:39 
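The namespace plumbing traced above by nvmf/setup.sh can be summarized as a short command sequence: create the target namespace, move the second port (cvl_0_1) into it, address both ends of the pair, bring the links up, and open TCP port 4420 through iptables. The sketch below is a dry-run only, since the real sequence needs root and the physical cvl_0_* devices from this test bed; the `run` helper is a local stand-in that echoes each command instead of executing it.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns setup performed in the log above.
# "run" just prints each command; replace it with direct execution
# (as root, with real devices) to reproduce the actual setup.
run() { echo "+ $*"; }

NS=nvmf_ns_spdk
run ip netns add "$NS"
run ip netns exec "$NS" ip link set lo up
run ip link set cvl_0_1 netns "$NS"                       # target side into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_0                   # initiator side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_1
run ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set cvl_0_1 up
run iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
run ip netns exec "$NS" ping -c 1 10.0.0.1                # cross-ns reachability check
```

Every command mirrors one line of the trace; the actual script additionally tags the iptables rule with `-m comment` so it can be cleaned up later.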
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:26.228 08:31:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:26.228 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:26.228 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.495 ms 00:34:26.228 00:34:26.228 --- 10.0.0.1 ping statistics --- 00:34:26.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.228 rtt min/avg/max/mdev = 0.495/0.495/0.495/0.000 ms 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n 
cvl_0_1 ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:34:26.228 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:26.228 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.240 ms 00:34:26.228 00:34:26.228 --- 10.0.0.2 ping statistics --- 00:34:26.228 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:26.228 rtt min/avg/max/mdev = 0.240/0.240/0.240/0.000 ms 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair++ )) 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:34:26.228 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:26.229 08:31:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=initiator1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target0 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n 
cvl_0_1 ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # get_net_dev target1 00:34:26.229 08:31:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # local dev=target1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # return 1 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@159 -- # dev= 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@160 -- # return 0 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:34:26.229 ' 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter 
start_nvmf_tgt 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=1924690 00:34:26.229 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 1924690 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 1924690 ']' 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.230 [2024-11-20 08:31:39.699023] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:26.230 [2024-11-20 08:31:39.699970] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:34:26.230 [2024-11-20 08:31:39.700009] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:26.230 [2024-11-20 08:31:39.777432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:26.230 [2024-11-20 08:31:39.820721] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:26.230 [2024-11-20 08:31:39.820758] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:26.230 [2024-11-20 08:31:39.820765] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:26.230 [2024-11-20 08:31:39.820771] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:26.230 [2024-11-20 08:31:39.820776] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:26.230 [2024-11-20 08:31:39.822196] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.230 [2024-11-20 08:31:39.822306] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:26.230 [2024-11-20 08:31:39.822341] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.230 [2024-11-20 08:31:39.822342] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:26.230 [2024-11-20 08:31:39.889927] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:26.230 [2024-11-20 08:31:39.890694] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:26.230 [2024-11-20 08:31:39.890911] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:34:26.230 [2024-11-20 08:31:39.891392] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:26.230 [2024-11-20 08:31:39.891425] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.230 [2024-11-20 08:31:39.956764] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.230 08:31:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.230 Malloc0 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.230 [2024-11-20 08:31:40.039307] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:26.230 08:31:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:34:26.230 test case1: single bdev can't be used in multiple subsystems 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.230 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.230 [2024-11-20 08:31:40.070943] 
bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:34:26.231 [2024-11-20 08:31:40.070970] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:34:26.231 [2024-11-20 08:31:40.070978] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:34:26.231 request: 00:34:26.231 { 00:34:26.231 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:34:26.231 "namespace": { 00:34:26.231 "bdev_name": "Malloc0", 00:34:26.231 "no_auto_visible": false 00:34:26.231 }, 00:34:26.231 "method": "nvmf_subsystem_add_ns", 00:34:26.231 "req_id": 1 00:34:26.231 } 00:34:26.231 Got JSON-RPC error response 00:34:26.231 response: 00:34:26.231 { 00:34:26.231 "code": -32602, 00:34:26.231 "message": "Invalid parameters" 00:34:26.231 } 00:34:26.231 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:34:26.231 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:34:26.231 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:34:26.231 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:34:26.231 Adding namespace failed - expected result. 
00:34:26.231 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:34:26.231 test case2: host connect to nvmf target in multiple paths 00:34:26.231 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:34:26.231 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.231 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:26.231 [2024-11-20 08:31:40.083060] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:34:26.231 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:26.231 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:26.490 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:34:26.748 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:34:26.748 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:34:26.748 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:26.748 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:34:26.748 08:31:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:34:28.651 08:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:28.651 08:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:28.651 08:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:28.651 08:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:34:28.651 08:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:28.651 08:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:34:28.651 08:31:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:28.651 [global] 00:34:28.651 thread=1 00:34:28.651 invalidate=1 00:34:28.651 rw=write 00:34:28.651 time_based=1 00:34:28.651 runtime=1 00:34:28.651 ioengine=libaio 00:34:28.651 direct=1 00:34:28.651 bs=4096 00:34:28.651 iodepth=1 00:34:28.651 norandommap=0 00:34:28.651 numjobs=1 00:34:28.651 00:34:28.651 verify_dump=1 00:34:28.651 verify_backlog=512 00:34:28.651 verify_state_save=0 00:34:28.651 do_verify=1 00:34:28.651 verify=crc32c-intel 00:34:28.651 [job0] 00:34:28.651 filename=/dev/nvme0n1 00:34:28.651 Could not set queue depth (nvme0n1) 00:34:28.909 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:28.909 fio-3.35 00:34:28.909 Starting 1 thread 00:34:30.285 00:34:30.285 job0: (groupid=0, jobs=1): err= 0: pid=1925306: Wed Nov 20 
08:31:44 2024 00:34:30.285 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:34:30.285 slat (nsec): min=7083, max=40299, avg=8118.61, stdev=1298.87 00:34:30.285 clat (usec): min=167, max=374, avg=194.85, stdev=18.42 00:34:30.285 lat (usec): min=183, max=382, avg=202.97, stdev=18.45 00:34:30.285 clat percentiles (usec): 00:34:30.285 | 1.00th=[ 182], 5.00th=[ 184], 10.00th=[ 184], 20.00th=[ 186], 00:34:30.285 | 30.00th=[ 188], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 192], 00:34:30.285 | 70.00th=[ 194], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 253], 00:34:30.285 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 277], 99.95th=[ 285], 00:34:30.285 | 99.99th=[ 375] 00:34:30.285 write: IOPS=2559, BW=10.00MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:34:30.285 slat (usec): min=10, max=23991, avg=21.39, stdev=473.76 00:34:30.285 clat (usec): min=114, max=302, avg=158.60, stdev=41.32 00:34:30.285 lat (usec): min=140, max=24250, avg=179.99, stdev=477.53 00:34:30.285 clat percentiles (usec): 00:34:30.285 | 1.00th=[ 131], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 135], 00:34:30.285 | 30.00th=[ 137], 40.00th=[ 137], 50.00th=[ 139], 60.00th=[ 141], 00:34:30.285 | 70.00th=[ 145], 80.00th=[ 221], 90.00th=[ 241], 95.00th=[ 243], 00:34:30.285 | 99.00th=[ 245], 99.50th=[ 247], 99.90th=[ 265], 99.95th=[ 281], 00:34:30.285 | 99.99th=[ 302] 00:34:30.285 bw ( KiB/s): min=10536, max=10536, per=100.00%, avg=10536.00, stdev= 0.00, samples=1 00:34:30.285 iops : min= 2634, max= 2634, avg=2634.00, stdev= 0.00, samples=1 00:34:30.285 lat (usec) : 250=96.72%, 500=3.28% 00:34:30.285 cpu : usr=3.00%, sys=9.40%, ctx=5124, majf=0, minf=1 00:34:30.285 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:30.285 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.285 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:30.285 issued rwts: total=2560,2562,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:30.285 
latency : target=0, window=0, percentile=100.00%, depth=1 00:34:30.285 00:34:30.285 Run status group 0 (all jobs): 00:34:30.285 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:34:30.285 WRITE: bw=10.00MiB/s (10.5MB/s), 10.00MiB/s-10.00MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:34:30.285 00:34:30.285 Disk stats (read/write): 00:34:30.285 nvme0n1: ios=2091/2560, merge=0/0, ticks=1360/366, in_queue=1726, util=98.20% 00:34:30.285 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:30.285 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:34:30.285 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:30.285 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:34:30.285 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:34:30.285 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:30.285 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:30.285 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:30.285 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:34:30.285 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:34:30.285 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:34:30.285 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@335 -- # nvmfcleanup 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:30.286 rmmod nvme_tcp 00:34:30.286 rmmod nvme_fabrics 00:34:30.286 rmmod nvme_keyring 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 1924690 ']' 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 1924690 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 1924690 ']' 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 1924690 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1924690 00:34:30.286 08:31:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1924690' 00:34:30.286 killing process with pid 1924690 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 1924690 00:34:30.286 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 1924690 00:34:30.545 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:30.545 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:34:30.545 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@254 -- # local dev 00:34:30.545 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@257 -- # remove_target_ns 00:34:30.545 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:30.545 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:30.545 08:31:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@258 -- # delete_main_bridge 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@121 -- # return 0 00:34:33.081 08:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:34:33.081 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:34:33.082 08:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@274 -- # iptr 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-save 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@548 -- # iptables-restore 00:34:33.082 00:34:33.082 real 0m13.098s 00:34:33.082 user 0m23.731s 00:34:33.082 sys 0m6.153s 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:34:33.082 ************************************ 00:34:33.082 END TEST nvmf_nmic 00:34:33.082 ************************************ 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:33.082 ************************************ 00:34:33.082 START TEST nvmf_fio_target 00:34:33.082 ************************************ 00:34:33.082 08:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:34:33.082 * Looking for test storage... 00:34:33.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:33.082 
08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:33.082 08:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:33.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.082 --rc genhtml_branch_coverage=1 00:34:33.082 --rc genhtml_function_coverage=1 00:34:33.082 --rc genhtml_legend=1 00:34:33.082 --rc geninfo_all_blocks=1 00:34:33.082 --rc geninfo_unexecuted_blocks=1 00:34:33.082 00:34:33.082 ' 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:33.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.082 --rc genhtml_branch_coverage=1 00:34:33.082 --rc genhtml_function_coverage=1 00:34:33.082 --rc genhtml_legend=1 00:34:33.082 --rc geninfo_all_blocks=1 00:34:33.082 --rc geninfo_unexecuted_blocks=1 00:34:33.082 00:34:33.082 ' 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:33.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.082 --rc genhtml_branch_coverage=1 00:34:33.082 --rc genhtml_function_coverage=1 00:34:33.082 --rc genhtml_legend=1 00:34:33.082 --rc geninfo_all_blocks=1 00:34:33.082 --rc geninfo_unexecuted_blocks=1 00:34:33.082 00:34:33.082 ' 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:34:33.082 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:33.082 --rc genhtml_branch_coverage=1 00:34:33.082 --rc genhtml_function_coverage=1 00:34:33.082 --rc genhtml_legend=1 00:34:33.082 --rc geninfo_all_blocks=1 00:34:33.082 --rc geninfo_unexecuted_blocks=1 00:34:33.082 00:34:33.082 ' 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:33.082 08:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:33.082 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.083 08:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:33.083 08:31:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:34:33.083 08:31:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:38.404 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:38.404 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:34:38.404 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:38.404 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:38.404 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:38.404 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:38.404 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:34:38.404 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:34:38.404 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:38.404 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:34:38.678 08:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:38.678 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.678 
08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:38.678 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:38.678 Found net devices under 0000:86:00.0: cvl_0_0 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:38.678 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:38.679 Found net devices under 0000:86:00.1: cvl_0_1 00:34:38.679 08:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@247 -- # create_target_ns 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:38.679 08:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:34:38.679 08:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:38.679 10.0.0.1 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@11 -- # local val=167772162 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:38.679 10.0.0.2 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
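The trace above shows `val_to_ip` turning the harness's integer IP-pool values (167772161 = 0x0A000001, 167772162 = 0x0A000002) into dotted quads via `printf`. A minimal standalone sketch of that conversion; the shift/mask arithmetic is an assumption, since the trace only shows the final `printf` with the octets already split:

```shell
# Sketch of nvmf/setup.sh's val_to_ip: split a 32-bit integer into dotted-quad
# octets. The shift/mask logic is assumed; the trace only shows the printf.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24)) \
        $(((val >> 16) & 0xff)) \
        $(((val >> 8) & 0xff)) \
        $((val & 0xff))
}

val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772162   # -> 10.0.0.2
```

This is why `ip_pool=0x0a000001` with a stride of 2 per pair yields consecutive 10.0.0.x initiator/target addresses.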
nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:34:38.679 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:38.680 08:31:52 
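The interface-pair setup traced above (create the namespace, move the target PHY into it, assign addresses, mirror them into `ifalias`, bring links up, open the NVMe/TCP port) condenses to roughly the following. This is a sketch only: it requires root, the `cvl_0_0`/`cvl_0_1` device names are specific to this test node, and the function is defined but not invoked here:

```shell
# Condensed sketch of create_target_ns + setup_interface_pair from the trace.
# Requires root; cvl_0_0/cvl_0_1 are this node's ice/cvl PHY interfaces.
setup_nvmf_pair() {
    ip netns add nvmf_ns_spdk
    ip netns exec nvmf_ns_spdk ip link set lo up
    ip link set cvl_0_1 netns nvmf_ns_spdk               # target side lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_0                  # initiator address on the host
    echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias   # harness later reads IPs back via ifalias
    ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
    echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
    ip link set cvl_0_0 up
    ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
    # open the NVMe/TCP listener port through the host firewall
    iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
}
```

Storing each address in `ifalias` is what lets later helpers like `get_ip_address` recover 10.0.0.1/10.0.0.2 with a plain `cat` instead of parsing `ip addr` output.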
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:38.680 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat 
/sys/class/net/cvl_0_0/ifalias 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:38.983 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:38.983 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.478 ms 00:34:38.983 00:34:38.983 --- 10.0.0.1 ping statistics --- 00:34:38.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.983 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns 
exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:34:38.983 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:38.983 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:34:38.983 00:34:38.983 --- 10.0.0.2 ping statistics --- 00:34:38.983 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:38.983 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair++ )) 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:34:38.983 08:31:52 
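The `ping_ips` exchange above checks each pair in both directions: the initiator address is pinged from inside the target namespace, and the target address from the host side. A sketch of that check (defined, not run; it assumes the namespace setup from this run is already in place):

```shell
# Sketch of ping_ips for one initiator/target pair: verify reachability
# both ways across the namespace boundary before starting the target.
verify_pair() {
    # from the target namespace to the initiator address
    ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 || return 1
    # from the host to the target address inside the namespace
    ping -c 1 10.0.0.2 || return 1
}
```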
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator0 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:34:38.983 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # 
get_net_dev initiator1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=initiator1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target0 00:34:38.984 08:31:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target0 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@156 -- # local dev=target1 
in_ns=NVMF_TARGET_NS_CMD ip 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # get_net_dev target1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # local dev=target1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # return 1 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@159 -- # dev= 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@160 -- # return 0 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:34:38.984 ' 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=1929080 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 1929080 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 1929080 ']' 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:34:38.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:38.984 08:31:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:38.984 [2024-11-20 08:31:52.885363] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:34:38.984 [2024-11-20 08:31:52.886264] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:34:38.984 [2024-11-20 08:31:52.886297] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:38.984 [2024-11-20 08:31:52.961306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:39.244 [2024-11-20 08:31:53.003751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:39.244 [2024-11-20 08:31:53.003789] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:39.244 [2024-11-20 08:31:53.003796] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:39.244 [2024-11-20 08:31:53.003802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:39.244 [2024-11-20 08:31:53.003807] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:39.244 [2024-11-20 08:31:53.005376] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.244 [2024-11-20 08:31:53.005483] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:39.245 [2024-11-20 08:31:53.005594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:39.245 [2024-11-20 08:31:53.005594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:39.245 [2024-11-20 08:31:53.073003] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:34:39.245 [2024-11-20 08:31:53.074053] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:34:39.245 [2024-11-20 08:31:53.074284] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:34:39.245 [2024-11-20 08:31:53.074512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:34:39.245 [2024-11-20 08:31:53.074573] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:34:39.245 08:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:39.245 08:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:34:39.245 08:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:39.245 08:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:39.245 08:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:39.245 08:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:39.245 08:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:34:39.504 [2024-11-20 08:31:53.314308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:39.504 08:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:39.763 08:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:34:39.763 08:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:40.023 08:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:34:40.023 08:31:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:40.023 
08:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:34:40.023 08:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:40.282 08:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:34:40.282 08:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:34:40.541 08:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:40.800 08:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:34:40.800 08:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:41.059 08:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:34:41.059 08:31:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:34:41.059 08:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:34:41.059 08:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:34:41.318 08:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
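The fio.sh RPC calls above build the bdev inventory in three groups: two plain malloc bdevs, two more assembled into a raid0, and three assembled into a concat raid. Condensed as a sketch, with `rpc.py` assumed to be on PATH (the trace invokes it by its full workspace path) and wrapped in a function since it needs a running nvmf target:

```shell
# Sketch of fio.sh's bdev setup; assumes rpc.py on PATH and a running target.
# bdev_malloc_create 64 512 = 64 MiB bdev with 512-byte blocks.
create_fio_bdevs() {
    rpc.py bdev_malloc_create 64 512    # Malloc0 \_ exported directly
    rpc.py bdev_malloc_create 64 512    # Malloc1 /
    rpc.py bdev_malloc_create 64 512    # Malloc2 \_ striped into raid0
    rpc.py bdev_malloc_create 64 512    # Malloc3 /
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3'
    rpc.py bdev_malloc_create 64 512    # Malloc4 \
    rpc.py bdev_malloc_create 64 512    # Malloc5  > joined into concat0
    rpc.py bdev_malloc_create 64 512    # Malloc6 /
    rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6'
}
```

The four bdevs that reach the subsystem (Malloc0, Malloc1, raid0, concat0) are what the later `waitforserial ... 4` count corresponds to.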
target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:34:41.576 08:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:41.576 08:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:41.576 08:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:34:41.576 08:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:34:41.835 08:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:42.094 [2024-11-20 08:31:55.946173] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:42.094 08:31:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:34:42.352 08:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:34:42.611 08:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:34:42.870 08:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:34:42.870 08:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:34:42.870 08:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:34:42.870 08:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:34:42.870 08:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:34:42.870 08:31:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:34:44.776 08:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:34:44.776 08:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:34:44.776 08:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:34:44.776 08:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:34:44.776 08:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:34:44.776 08:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:34:44.776 08:31:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:34:44.776 [global] 00:34:44.776 thread=1 00:34:44.776 invalidate=1 
00:34:44.776 rw=write 00:34:44.776 time_based=1 00:34:44.776 runtime=1 00:34:44.776 ioengine=libaio 00:34:44.776 direct=1 00:34:44.776 bs=4096 00:34:44.776 iodepth=1 00:34:44.776 norandommap=0 00:34:44.776 numjobs=1 00:34:44.776 00:34:44.776 verify_dump=1 00:34:44.776 verify_backlog=512 00:34:44.776 verify_state_save=0 00:34:44.776 do_verify=1 00:34:44.776 verify=crc32c-intel 00:34:44.776 [job0] 00:34:44.776 filename=/dev/nvme0n1 00:34:44.776 [job1] 00:34:44.776 filename=/dev/nvme0n2 00:34:44.776 [job2] 00:34:44.776 filename=/dev/nvme0n3 00:34:44.776 [job3] 00:34:44.776 filename=/dev/nvme0n4 00:34:44.776 Could not set queue depth (nvme0n1) 00:34:44.776 Could not set queue depth (nvme0n2) 00:34:44.776 Could not set queue depth (nvme0n3) 00:34:44.776 Could not set queue depth (nvme0n4) 00:34:45.035 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:45.035 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:45.035 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:45.035 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:45.035 fio-3.35 00:34:45.035 Starting 4 threads 00:34:46.414 00:34:46.414 job0: (groupid=0, jobs=1): err= 0: pid=1930268: Wed Nov 20 08:32:00 2024 00:34:46.414 read: IOPS=20, BW=82.9KiB/s (84.9kB/s)(84.0KiB/1013msec) 00:34:46.414 slat (nsec): min=9509, max=23851, avg=22881.29, stdev=3069.47 00:34:46.414 clat (usec): min=40881, max=41044, avg=40969.14, stdev=44.95 00:34:46.414 lat (usec): min=40894, max=41067, avg=40992.02, stdev=46.32 00:34:46.414 clat percentiles (usec): 00:34:46.414 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:46.414 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:46.414 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:34:46.414 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:46.414 | 99.99th=[41157] 00:34:46.414 write: IOPS=505, BW=2022KiB/s (2070kB/s)(2048KiB/1013msec); 0 zone resets 00:34:46.414 slat (usec): min=9, max=12128, avg=34.25, stdev=535.54 00:34:46.414 clat (usec): min=149, max=439, avg=254.29, stdev=55.32 00:34:46.414 lat (usec): min=159, max=12534, avg=288.55, stdev=545.03 00:34:46.414 clat percentiles (usec): 00:34:46.414 | 1.00th=[ 159], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 198], 00:34:46.414 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 265], 00:34:46.414 | 70.00th=[ 273], 80.00th=[ 310], 90.00th=[ 330], 95.00th=[ 351], 00:34:46.414 | 99.00th=[ 388], 99.50th=[ 408], 99.90th=[ 441], 99.95th=[ 441], 00:34:46.414 | 99.99th=[ 441] 00:34:46.414 bw ( KiB/s): min= 4096, max= 4096, per=20.38%, avg=4096.00, stdev= 0.00, samples=1 00:34:46.414 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:46.414 lat (usec) : 250=50.84%, 500=45.22% 00:34:46.414 lat (msec) : 50=3.94% 00:34:46.414 cpu : usr=0.40%, sys=0.30%, ctx=535, majf=0, minf=1 00:34:46.414 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.414 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.414 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.414 job1: (groupid=0, jobs=1): err= 0: pid=1930287: Wed Nov 20 08:32:00 2024 00:34:46.414 read: IOPS=26, BW=106KiB/s (109kB/s)(108KiB/1019msec) 00:34:46.414 slat (nsec): min=8136, max=25497, avg=20812.48, stdev=5962.74 00:34:46.414 clat (usec): min=304, max=41141, avg=33390.77, stdev=16057.33 00:34:46.414 lat (usec): min=329, max=41165, avg=33411.58, stdev=16058.41 00:34:46.414 clat percentiles (usec): 00:34:46.414 | 1.00th=[ 306], 5.00th=[ 314], 10.00th=[ 338], 
20.00th=[40633], 00:34:46.414 | 30.00th=[40633], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:34:46.414 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:46.414 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:46.414 | 99.99th=[41157] 00:34:46.414 write: IOPS=502, BW=2010KiB/s (2058kB/s)(2048KiB/1019msec); 0 zone resets 00:34:46.414 slat (nsec): min=10811, max=48548, avg=12605.09, stdev=2785.77 00:34:46.414 clat (usec): min=126, max=663, avg=205.38, stdev=41.84 00:34:46.414 lat (usec): min=152, max=675, avg=217.99, stdev=41.78 00:34:46.414 clat percentiles (usec): 00:34:46.414 | 1.00th=[ 145], 5.00th=[ 147], 10.00th=[ 155], 20.00th=[ 174], 00:34:46.414 | 30.00th=[ 180], 40.00th=[ 192], 50.00th=[ 200], 60.00th=[ 215], 00:34:46.415 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 245], 95.00th=[ 253], 00:34:46.415 | 99.00th=[ 314], 99.50th=[ 359], 99.90th=[ 668], 99.95th=[ 668], 00:34:46.415 | 99.99th=[ 668] 00:34:46.415 bw ( KiB/s): min= 4096, max= 4096, per=20.38%, avg=4096.00, stdev= 0.00, samples=1 00:34:46.415 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:46.415 lat (usec) : 250=89.42%, 500=6.31%, 750=0.19% 00:34:46.415 lat (msec) : 50=4.08% 00:34:46.415 cpu : usr=0.39%, sys=0.98%, ctx=540, majf=0, minf=2 00:34:46.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.415 issued rwts: total=27,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.415 job2: (groupid=0, jobs=1): err= 0: pid=1930316: Wed Nov 20 08:32:00 2024 00:34:46.415 read: IOPS=1022, BW=4091KiB/s (4189kB/s)(4144KiB/1013msec) 00:34:46.415 slat (nsec): min=4154, max=23259, avg=6453.77, stdev=809.98 00:34:46.415 clat (usec): min=189, max=41074, 
avg=702.76, stdev=4361.19 00:34:46.415 lat (usec): min=195, max=41083, avg=709.21, stdev=4361.48 00:34:46.415 clat percentiles (usec): 00:34:46.415 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 206], 20.00th=[ 210], 00:34:46.415 | 30.00th=[ 215], 40.00th=[ 221], 50.00th=[ 235], 60.00th=[ 245], 00:34:46.415 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 253], 95.00th=[ 258], 00:34:46.415 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:46.415 | 99.99th=[41157] 00:34:46.415 write: IOPS=1516, BW=6065KiB/s (6211kB/s)(6144KiB/1013msec); 0 zone resets 00:34:46.415 slat (usec): min=5, max=12254, avg=15.43, stdev=312.48 00:34:46.415 clat (usec): min=126, max=1955, avg=160.46, stdev=49.24 00:34:46.415 lat (usec): min=134, max=12555, avg=175.89, stdev=319.99 00:34:46.415 clat percentiles (usec): 00:34:46.415 | 1.00th=[ 133], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 147], 00:34:46.415 | 30.00th=[ 151], 40.00th=[ 155], 50.00th=[ 157], 60.00th=[ 161], 00:34:46.415 | 70.00th=[ 165], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 184], 00:34:46.415 | 99.00th=[ 206], 99.50th=[ 269], 99.90th=[ 474], 99.95th=[ 1958], 00:34:46.415 | 99.99th=[ 1958] 00:34:46.415 bw ( KiB/s): min= 920, max=11368, per=30.57%, avg=6144.00, stdev=7387.85, samples=2 00:34:46.415 iops : min= 230, max= 2842, avg=1536.00, stdev=1846.96, samples=2 00:34:46.415 lat (usec) : 250=91.37%, 500=8.13% 00:34:46.415 lat (msec) : 2=0.04%, 50=0.47% 00:34:46.415 cpu : usr=0.89%, sys=1.68%, ctx=2575, majf=0, minf=1 00:34:46.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.415 issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:46.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.415 job3: (groupid=0, jobs=1): err= 0: pid=1930320: Wed Nov 20 08:32:00 2024 
00:34:46.415 read: IOPS=2059, BW=8240KiB/s (8438kB/s)(8248KiB/1001msec) 00:34:46.415 slat (nsec): min=7747, max=47836, avg=9051.56, stdev=1692.68 00:34:46.415 clat (usec): min=188, max=1559, avg=244.55, stdev=34.13 00:34:46.415 lat (usec): min=197, max=1567, avg=253.60, stdev=34.12 00:34:46.415 clat percentiles (usec): 00:34:46.415 | 1.00th=[ 219], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:34:46.415 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 243], 60.00th=[ 245], 00:34:46.415 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 258], 00:34:46.415 | 99.00th=[ 281], 99.50th=[ 367], 99.90th=[ 537], 99.95th=[ 570], 00:34:46.415 | 99.99th=[ 1565] 00:34:46.415 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:34:46.415 slat (usec): min=11, max=12193, avg=17.74, stdev=240.74 00:34:46.415 clat (usec): min=123, max=586, avg=161.87, stdev=27.50 00:34:46.415 lat (usec): min=140, max=12429, avg=179.61, stdev=243.81 00:34:46.415 clat percentiles (usec): 00:34:46.415 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 141], 00:34:46.415 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 155], 60.00th=[ 167], 00:34:46.415 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 196], 95.00th=[ 210], 00:34:46.415 | 99.00th=[ 231], 99.50th=[ 239], 99.90th=[ 416], 99.95th=[ 424], 00:34:46.415 | 99.99th=[ 586] 00:34:46.415 bw ( KiB/s): min= 8944, max= 8944, per=44.50%, avg=8944.00, stdev= 0.00, samples=1 00:34:46.415 iops : min= 2236, max= 2236, avg=2236.00, stdev= 0.00, samples=1 00:34:46.415 lat (usec) : 250=92.36%, 500=7.53%, 750=0.09% 00:34:46.415 lat (msec) : 2=0.02% 00:34:46.415 cpu : usr=3.60%, sys=8.40%, ctx=4624, majf=0, minf=1 00:34:46.415 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:46.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:46.415 issued rwts: total=2062,2560,0,0 short=0,0,0,0 
dropped=0,0,0,0 00:34:46.415 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:46.415 00:34:46.415 Run status group 0 (all jobs): 00:34:46.415 READ: bw=12.1MiB/s (12.6MB/s), 82.9KiB/s-8240KiB/s (84.9kB/s-8438kB/s), io=12.3MiB (12.9MB), run=1001-1019msec 00:34:46.415 WRITE: bw=19.6MiB/s (20.6MB/s), 2010KiB/s-9.99MiB/s (2058kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1019msec 00:34:46.415 00:34:46.415 Disk stats (read/write): 00:34:46.415 nvme0n1: ios=38/512, merge=0/0, ticks=1477/129, in_queue=1606, util=87.37% 00:34:46.415 nvme0n2: ios=70/512, merge=0/0, ticks=923/100, in_queue=1023, util=88.67% 00:34:46.415 nvme0n3: ios=1055/1536, merge=0/0, ticks=1387/243, in_queue=1630, util=95.43% 00:34:46.415 nvme0n4: ios=1637/2048, merge=0/0, ticks=1270/322, in_queue=1592, util=100.00% 00:34:46.415 08:32:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:34:46.415 [global] 00:34:46.415 thread=1 00:34:46.415 invalidate=1 00:34:46.415 rw=randwrite 00:34:46.415 time_based=1 00:34:46.415 runtime=1 00:34:46.415 ioengine=libaio 00:34:46.415 direct=1 00:34:46.415 bs=4096 00:34:46.415 iodepth=1 00:34:46.415 norandommap=0 00:34:46.415 numjobs=1 00:34:46.415 00:34:46.415 verify_dump=1 00:34:46.415 verify_backlog=512 00:34:46.415 verify_state_save=0 00:34:46.415 do_verify=1 00:34:46.415 verify=crc32c-intel 00:34:46.415 [job0] 00:34:46.415 filename=/dev/nvme0n1 00:34:46.415 [job1] 00:34:46.415 filename=/dev/nvme0n2 00:34:46.415 [job2] 00:34:46.415 filename=/dev/nvme0n3 00:34:46.415 [job3] 00:34:46.415 filename=/dev/nvme0n4 00:34:46.415 Could not set queue depth (nvme0n1) 00:34:46.415 Could not set queue depth (nvme0n2) 00:34:46.415 Could not set queue depth (nvme0n3) 00:34:46.415 Could not set queue depth (nvme0n4) 00:34:46.675 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:34:46.675 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:46.675 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:46.675 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:46.675 fio-3.35 00:34:46.675 Starting 4 threads 00:34:48.053 00:34:48.053 job0: (groupid=0, jobs=1): err= 0: pid=1930726: Wed Nov 20 08:32:01 2024 00:34:48.053 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:34:48.053 slat (nsec): min=7180, max=37043, avg=8198.22, stdev=1206.31 00:34:48.053 clat (usec): min=178, max=384, avg=198.85, stdev=17.69 00:34:48.053 lat (usec): min=186, max=394, avg=207.05, stdev=17.78 00:34:48.053 clat percentiles (usec): 00:34:48.053 | 1.00th=[ 184], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 188], 00:34:48.053 | 30.00th=[ 190], 40.00th=[ 192], 50.00th=[ 192], 60.00th=[ 194], 00:34:48.053 | 70.00th=[ 198], 80.00th=[ 204], 90.00th=[ 223], 95.00th=[ 247], 00:34:48.053 | 99.00th=[ 258], 99.50th=[ 260], 99.90th=[ 273], 99.95th=[ 277], 00:34:48.053 | 99.99th=[ 383] 00:34:48.053 write: IOPS=2596, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1001msec); 0 zone resets 00:34:48.053 slat (usec): min=10, max=25390, avg=21.49, stdev=497.82 00:34:48.053 clat (usec): min=125, max=340, avg=153.43, stdev=23.25 00:34:48.053 lat (usec): min=140, max=25668, avg=174.92, stdev=500.80 00:34:48.053 clat percentiles (usec): 00:34:48.053 | 1.00th=[ 133], 5.00th=[ 133], 10.00th=[ 135], 20.00th=[ 137], 00:34:48.053 | 30.00th=[ 137], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 145], 00:34:48.053 | 70.00th=[ 169], 80.00th=[ 182], 90.00th=[ 186], 95.00th=[ 190], 00:34:48.053 | 99.00th=[ 237], 99.50th=[ 247], 99.90th=[ 277], 99.95th=[ 318], 00:34:48.053 | 99.99th=[ 343] 00:34:48.053 bw ( KiB/s): min=12288, max=12288, per=75.11%, avg=12288.00, stdev= 0.00, samples=1 00:34:48.053 iops : min= 3072, max= 
3072, avg=3072.00, stdev= 0.00, samples=1 00:34:48.053 lat (usec) : 250=98.27%, 500=1.73% 00:34:48.053 cpu : usr=4.30%, sys=8.10%, ctx=5163, majf=0, minf=1 00:34:48.053 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.053 issued rwts: total=2560,2599,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.053 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.053 job1: (groupid=0, jobs=1): err= 0: pid=1930746: Wed Nov 20 08:32:01 2024 00:34:48.053 read: IOPS=294, BW=1179KiB/s (1207kB/s)(1180KiB/1001msec) 00:34:48.053 slat (nsec): min=6624, max=25891, avg=7663.44, stdev=1911.04 00:34:48.053 clat (usec): min=191, max=41073, avg=3045.95, stdev=10247.52 00:34:48.053 lat (usec): min=199, max=41081, avg=3053.62, stdev=10248.26 00:34:48.053 clat percentiles (usec): 00:34:48.053 | 1.00th=[ 202], 5.00th=[ 253], 10.00th=[ 281], 20.00th=[ 281], 00:34:48.053 | 30.00th=[ 285], 40.00th=[ 285], 50.00th=[ 289], 60.00th=[ 289], 00:34:48.053 | 70.00th=[ 293], 80.00th=[ 297], 90.00th=[ 318], 95.00th=[41157], 00:34:48.053 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:48.053 | 99.99th=[41157] 00:34:48.053 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:34:48.053 slat (nsec): min=9257, max=48526, avg=10786.97, stdev=2615.68 00:34:48.053 clat (usec): min=144, max=349, avg=179.81, stdev=17.29 00:34:48.053 lat (usec): min=156, max=390, avg=190.60, stdev=17.96 00:34:48.053 clat percentiles (usec): 00:34:48.053 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 167], 00:34:48.053 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 184], 00:34:48.053 | 70.00th=[ 188], 80.00th=[ 192], 90.00th=[ 196], 95.00th=[ 202], 00:34:48.053 | 99.00th=[ 219], 99.50th=[ 225], 99.90th=[ 351], 99.95th=[ 351], 00:34:48.053 | 
99.99th=[ 351] 00:34:48.053 bw ( KiB/s): min= 4096, max= 4096, per=25.04%, avg=4096.00, stdev= 0.00, samples=1 00:34:48.053 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:48.053 lat (usec) : 250=64.68%, 500=32.84% 00:34:48.053 lat (msec) : 50=2.48% 00:34:48.053 cpu : usr=0.30%, sys=0.80%, ctx=807, majf=0, minf=1 00:34:48.053 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.053 issued rwts: total=295,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.053 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.053 job2: (groupid=0, jobs=1): err= 0: pid=1930762: Wed Nov 20 08:32:01 2024 00:34:48.053 read: IOPS=60, BW=241KiB/s (247kB/s)(244KiB/1011msec) 00:34:48.053 slat (nsec): min=7175, max=24275, avg=13058.79, stdev=7141.46 00:34:48.053 clat (usec): min=193, max=41226, avg=14920.23, stdev=19682.10 00:34:48.053 lat (usec): min=201, max=41237, avg=14933.29, stdev=19688.54 00:34:48.053 clat percentiles (usec): 00:34:48.053 | 1.00th=[ 194], 5.00th=[ 219], 10.00th=[ 223], 20.00th=[ 239], 00:34:48.053 | 30.00th=[ 253], 40.00th=[ 265], 50.00th=[ 297], 60.00th=[ 310], 00:34:48.053 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:48.053 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:48.053 | 99.99th=[41157] 00:34:48.053 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:34:48.053 slat (nsec): min=9230, max=40384, avg=10483.13, stdev=1782.21 00:34:48.053 clat (usec): min=159, max=380, avg=181.18, stdev=14.91 00:34:48.053 lat (usec): min=168, max=421, avg=191.67, stdev=15.81 00:34:48.053 clat percentiles (usec): 00:34:48.053 | 1.00th=[ 165], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:34:48.053 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 
182], 00:34:48.053 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:34:48.053 | 99.00th=[ 221], 99.50th=[ 258], 99.90th=[ 379], 99.95th=[ 379], 00:34:48.053 | 99.99th=[ 379] 00:34:48.053 bw ( KiB/s): min= 4096, max= 4096, per=25.04%, avg=4096.00, stdev= 0.00, samples=1 00:34:48.053 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:48.053 lat (usec) : 250=91.80%, 500=4.36% 00:34:48.053 lat (msec) : 50=3.84% 00:34:48.053 cpu : usr=0.10%, sys=0.69%, ctx=574, majf=0, minf=1 00:34:48.053 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.053 issued rwts: total=61,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.053 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.053 job3: (groupid=0, jobs=1): err= 0: pid=1930768: Wed Nov 20 08:32:01 2024 00:34:48.053 read: IOPS=21, BW=87.2KiB/s (89.3kB/s)(88.0KiB/1009msec) 00:34:48.053 slat (nsec): min=10170, max=24968, avg=23960.50, stdev=3085.22 00:34:48.053 clat (usec): min=40887, max=41938, avg=41010.28, stdev=210.27 00:34:48.053 lat (usec): min=40911, max=41963, avg=41034.24, stdev=210.47 00:34:48.053 clat percentiles (usec): 00:34:48.053 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:48.053 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:48.053 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:48.053 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:48.053 | 99.99th=[41681] 00:34:48.053 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:34:48.053 slat (nsec): min=10333, max=39904, avg=11505.29, stdev=1855.18 00:34:48.053 clat (usec): min=163, max=334, avg=191.82, stdev=14.27 00:34:48.053 lat (usec): min=173, max=374, avg=203.32, stdev=15.03 
00:34:48.053 clat percentiles (usec): 00:34:48.053 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 180], 20.00th=[ 184], 00:34:48.053 | 30.00th=[ 186], 40.00th=[ 188], 50.00th=[ 190], 60.00th=[ 192], 00:34:48.053 | 70.00th=[ 196], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 215], 00:34:48.053 | 99.00th=[ 227], 99.50th=[ 265], 99.90th=[ 334], 99.95th=[ 334], 00:34:48.053 | 99.99th=[ 334] 00:34:48.053 bw ( KiB/s): min= 4096, max= 4096, per=25.04%, avg=4096.00, stdev= 0.00, samples=1 00:34:48.053 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:34:48.053 lat (usec) : 250=95.13%, 500=0.75% 00:34:48.053 lat (msec) : 50=4.12% 00:34:48.053 cpu : usr=0.79%, sys=0.50%, ctx=535, majf=0, minf=1 00:34:48.053 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:48.053 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.053 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:48.054 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:48.054 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:48.054 00:34:48.054 Run status group 0 (all jobs): 00:34:48.054 READ: bw=11.4MiB/s (11.9MB/s), 87.2KiB/s-9.99MiB/s (89.3kB/s-10.5MB/s), io=11.5MiB (12.0MB), run=1001-1011msec 00:34:48.054 WRITE: bw=16.0MiB/s (16.8MB/s), 2026KiB/s-10.1MiB/s (2074kB/s-10.6MB/s), io=16.2MiB (16.9MB), run=1001-1011msec 00:34:48.054 00:34:48.054 Disk stats (read/write): 00:34:48.054 nvme0n1: ios=2073/2286, merge=0/0, ticks=1378/320, in_queue=1698, util=96.79% 00:34:48.054 nvme0n2: ios=18/512, merge=0/0, ticks=738/94, in_queue=832, util=86.52% 00:34:48.054 nvme0n3: ios=18/512, merge=0/0, ticks=738/92, in_queue=830, util=88.81% 00:34:48.054 nvme0n4: ios=17/512, merge=0/0, ticks=698/95, in_queue=793, util=89.47% 00:34:48.054 08:32:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 
-d 128 -t write -r 1 -v 00:34:48.054 [global] 00:34:48.054 thread=1 00:34:48.054 invalidate=1 00:34:48.054 rw=write 00:34:48.054 time_based=1 00:34:48.054 runtime=1 00:34:48.054 ioengine=libaio 00:34:48.054 direct=1 00:34:48.054 bs=4096 00:34:48.054 iodepth=128 00:34:48.054 norandommap=0 00:34:48.054 numjobs=1 00:34:48.054 00:34:48.054 verify_dump=1 00:34:48.054 verify_backlog=512 00:34:48.054 verify_state_save=0 00:34:48.054 do_verify=1 00:34:48.054 verify=crc32c-intel 00:34:48.054 [job0] 00:34:48.054 filename=/dev/nvme0n1 00:34:48.054 [job1] 00:34:48.054 filename=/dev/nvme0n2 00:34:48.054 [job2] 00:34:48.054 filename=/dev/nvme0n3 00:34:48.054 [job3] 00:34:48.054 filename=/dev/nvme0n4 00:34:48.054 Could not set queue depth (nvme0n1) 00:34:48.054 Could not set queue depth (nvme0n2) 00:34:48.054 Could not set queue depth (nvme0n3) 00:34:48.054 Could not set queue depth (nvme0n4) 00:34:48.312 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:48.312 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:48.312 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:48.312 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:48.312 fio-3.35 00:34:48.312 Starting 4 threads 00:34:49.688 00:34:49.688 job0: (groupid=0, jobs=1): err= 0: pid=1931156: Wed Nov 20 08:32:03 2024 00:34:49.688 read: IOPS=5564, BW=21.7MiB/s (22.8MB/s)(21.8MiB/1003msec) 00:34:49.688 slat (nsec): min=1320, max=3732.6k, avg=85889.99, stdev=427121.96 00:34:49.688 clat (usec): min=484, max=15919, avg=10899.94, stdev=1481.79 00:34:49.688 lat (usec): min=3723, max=15929, avg=10985.83, stdev=1491.75 00:34:49.688 clat percentiles (usec): 00:34:49.688 | 1.00th=[ 7439], 5.00th=[ 8455], 10.00th=[ 9241], 20.00th=[ 9765], 00:34:49.688 | 30.00th=[10290], 40.00th=[10552], 
50.00th=[10945], 60.00th=[11338], 00:34:49.688 | 70.00th=[11731], 80.00th=[12125], 90.00th=[12649], 95.00th=[13173], 00:34:49.688 | 99.00th=[14353], 99.50th=[14615], 99.90th=[14877], 99.95th=[15139], 00:34:49.688 | 99.99th=[15926] 00:34:49.688 write: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec); 0 zone resets 00:34:49.688 slat (usec): min=2, max=11739, avg=87.63, stdev=460.97 00:34:49.688 clat (usec): min=6875, max=38303, avg=11560.28, stdev=3312.90 00:34:49.688 lat (usec): min=6881, max=38315, avg=11647.91, stdev=3333.10 00:34:49.688 clat percentiles (usec): 00:34:49.688 | 1.00th=[ 8225], 5.00th=[ 9241], 10.00th=[ 9634], 20.00th=[ 9896], 00:34:49.688 | 30.00th=[10290], 40.00th=[10552], 50.00th=[11076], 60.00th=[11600], 00:34:49.688 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12387], 95.00th=[13960], 00:34:49.688 | 99.00th=[28705], 99.50th=[32113], 99.90th=[38011], 99.95th=[38536], 00:34:49.688 | 99.99th=[38536] 00:34:49.688 bw ( KiB/s): min=21168, max=23888, per=29.49%, avg=22528.00, stdev=1923.33, samples=2 00:34:49.688 iops : min= 5292, max= 5972, avg=5632.00, stdev=480.83, samples=2 00:34:49.688 lat (usec) : 500=0.01% 00:34:49.688 lat (msec) : 4=0.21%, 10=22.88%, 20=75.23%, 50=1.68% 00:34:49.688 cpu : usr=3.29%, sys=6.69%, ctx=677, majf=0, minf=1 00:34:49.688 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:34:49.688 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.688 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:49.689 issued rwts: total=5581,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:49.689 job1: (groupid=0, jobs=1): err= 0: pid=1931159: Wed Nov 20 08:32:03 2024 00:34:49.689 read: IOPS=4079, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1004msec) 00:34:49.689 slat (nsec): min=1338, max=13457k, avg=118050.12, stdev=757952.05 00:34:49.689 clat (usec): min=6830, max=37971, avg=15508.24, stdev=5918.27 
00:34:49.689 lat (usec): min=6838, max=37996, avg=15626.29, stdev=5975.54 00:34:49.689 clat percentiles (usec): 00:34:49.689 | 1.00th=[ 8160], 5.00th=[ 9765], 10.00th=[10552], 20.00th=[11338], 00:34:49.689 | 30.00th=[11863], 40.00th=[12387], 50.00th=[12649], 60.00th=[13435], 00:34:49.689 | 70.00th=[16057], 80.00th=[21890], 90.00th=[25560], 95.00th=[27132], 00:34:49.689 | 99.00th=[30802], 99.50th=[33424], 99.90th=[37487], 99.95th=[38011], 00:34:49.689 | 99.99th=[38011] 00:34:49.689 write: IOPS=4368, BW=17.1MiB/s (17.9MB/s)(17.1MiB/1004msec); 0 zone resets 00:34:49.689 slat (usec): min=2, max=9744, avg=110.88, stdev=707.11 00:34:49.689 clat (usec): min=2909, max=49532, avg=14491.27, stdev=6224.19 00:34:49.689 lat (usec): min=3660, max=49545, avg=14602.15, stdev=6290.00 00:34:49.689 clat percentiles (usec): 00:34:49.689 | 1.00th=[ 5997], 5.00th=[ 8225], 10.00th=[10159], 20.00th=[11207], 00:34:49.689 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12649], 00:34:49.689 | 70.00th=[15139], 80.00th=[17957], 90.00th=[20317], 95.00th=[25035], 00:34:49.689 | 99.00th=[42730], 99.50th=[46400], 99.90th=[48497], 99.95th=[49546], 00:34:49.689 | 99.99th=[49546] 00:34:49.689 bw ( KiB/s): min=16384, max=17688, per=22.30%, avg=17036.00, stdev=922.07, samples=2 00:34:49.689 iops : min= 4096, max= 4422, avg=4259.00, stdev=230.52, samples=2 00:34:49.689 lat (msec) : 4=0.11%, 10=7.38%, 20=74.53%, 50=17.98% 00:34:49.689 cpu : usr=4.69%, sys=5.38%, ctx=332, majf=0, minf=1 00:34:49.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:49.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:49.689 issued rwts: total=4096,4386,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:49.689 job2: (groupid=0, jobs=1): err= 0: pid=1931160: Wed Nov 20 08:32:03 2024 00:34:49.689 read: 
IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec) 00:34:49.689 slat (nsec): min=1242, max=11987k, avg=126038.57, stdev=844748.23 00:34:49.689 clat (usec): min=8761, max=74628, avg=18179.60, stdev=9553.86 00:34:49.689 lat (usec): min=8770, max=74651, avg=18305.64, stdev=9603.63 00:34:49.689 clat percentiles (usec): 00:34:49.689 | 1.00th=[ 9503], 5.00th=[10814], 10.00th=[11600], 20.00th=[12125], 00:34:49.689 | 30.00th=[13435], 40.00th=[14222], 50.00th=[15139], 60.00th=[16712], 00:34:49.689 | 70.00th=[18482], 80.00th=[22152], 90.00th=[27132], 95.00th=[32637], 00:34:49.689 | 99.00th=[63701], 99.50th=[63701], 99.90th=[74974], 99.95th=[74974], 00:34:49.689 | 99.99th=[74974] 00:34:49.689 write: IOPS=4033, BW=15.8MiB/s (16.5MB/s)(15.8MiB/1005msec); 0 zone resets 00:34:49.689 slat (usec): min=2, max=41753, avg=128.70, stdev=1083.32 00:34:49.689 clat (usec): min=461, max=33601, avg=15356.99, stdev=4371.44 00:34:49.689 lat (usec): min=717, max=59289, avg=15485.69, stdev=4498.67 00:34:49.689 clat percentiles (usec): 00:34:49.689 | 1.00th=[ 5997], 5.00th=[ 9896], 10.00th=[11338], 20.00th=[12911], 00:34:49.689 | 30.00th=[13042], 40.00th=[13173], 50.00th=[13435], 60.00th=[15008], 00:34:49.689 | 70.00th=[17957], 80.00th=[19006], 90.00th=[20841], 95.00th=[23200], 00:34:49.689 | 99.00th=[29230], 99.50th=[29492], 99.90th=[30540], 99.95th=[31065], 00:34:49.689 | 99.99th=[33817] 00:34:49.689 bw ( KiB/s): min=12288, max=19120, per=20.56%, avg=15704.00, stdev=4830.95, samples=2 00:34:49.689 iops : min= 3072, max= 4780, avg=3926.00, stdev=1207.74, samples=2 00:34:49.689 lat (usec) : 500=0.01%, 750=0.04%, 1000=0.01% 00:34:49.689 lat (msec) : 4=0.31%, 10=3.31%, 20=75.57%, 50=19.08%, 100=1.66% 00:34:49.689 cpu : usr=3.39%, sys=6.18%, ctx=246, majf=0, minf=1 00:34:49.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:34:49.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.1% 00:34:49.689 issued rwts: total=3584,4054,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:49.689 job3: (groupid=0, jobs=1): err= 0: pid=1931161: Wed Nov 20 08:32:03 2024 00:34:49.689 read: IOPS=4993, BW=19.5MiB/s (20.5MB/s)(19.6MiB/1003msec) 00:34:49.689 slat (nsec): min=1104, max=13220k, avg=99615.33, stdev=576907.53 00:34:49.689 clat (usec): min=741, max=33140, avg=12660.06, stdev=2557.75 00:34:49.689 lat (usec): min=3698, max=33148, avg=12759.68, stdev=2592.43 00:34:49.689 clat percentiles (usec): 00:34:49.689 | 1.00th=[ 6980], 5.00th=[ 9110], 10.00th=[10159], 20.00th=[11076], 00:34:49.689 | 30.00th=[11600], 40.00th=[11994], 50.00th=[12518], 60.00th=[12911], 00:34:49.689 | 70.00th=[13435], 80.00th=[13960], 90.00th=[15008], 95.00th=[16057], 00:34:49.689 | 99.00th=[22414], 99.50th=[25035], 99.90th=[28181], 99.95th=[29754], 00:34:49.689 | 99.99th=[33162] 00:34:49.689 write: IOPS=5104, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1003msec); 0 zone resets 00:34:49.689 slat (nsec): min=1888, max=8852.7k, avg=91915.44, stdev=501513.19 00:34:49.689 clat (usec): min=701, max=24162, avg=12410.10, stdev=1795.09 00:34:49.689 lat (usec): min=709, max=24172, avg=12502.02, stdev=1828.70 00:34:49.689 clat percentiles (usec): 00:34:49.689 | 1.00th=[ 6718], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11469], 00:34:49.689 | 30.00th=[11600], 40.00th=[11863], 50.00th=[12125], 60.00th=[12780], 00:34:49.689 | 70.00th=[13435], 80.00th=[13829], 90.00th=[14222], 95.00th=[14615], 00:34:49.689 | 99.00th=[16712], 99.50th=[17695], 99.90th=[18482], 99.95th=[19530], 00:34:49.689 | 99.99th=[24249] 00:34:49.689 bw ( KiB/s): min=19248, max=21712, per=26.81%, avg=20480.00, stdev=1742.31, samples=2 00:34:49.689 iops : min= 4812, max= 5428, avg=5120.00, stdev=435.58, samples=2 00:34:49.689 lat (usec) : 750=0.05% 00:34:49.689 lat (msec) : 2=0.15%, 4=0.28%, 10=6.65%, 20=91.85%, 50=1.02% 00:34:49.689 cpu : usr=4.49%, 
sys=6.09%, ctx=490, majf=0, minf=1 00:34:49.689 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:49.689 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.689 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:49.689 issued rwts: total=5008,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.689 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:49.689 00:34:49.689 Run status group 0 (all jobs): 00:34:49.689 READ: bw=71.0MiB/s (74.5MB/s), 13.9MiB/s-21.7MiB/s (14.6MB/s-22.8MB/s), io=71.4MiB (74.8MB), run=1003-1005msec 00:34:49.689 WRITE: bw=74.6MiB/s (78.2MB/s), 15.8MiB/s-21.9MiB/s (16.5MB/s-23.0MB/s), io=75.0MiB (78.6MB), run=1003-1005msec 00:34:49.689 00:34:49.689 Disk stats (read/write): 00:34:49.689 nvme0n1: ios=4660/5017, merge=0/0, ticks=15163/17821, in_queue=32984, util=98.40% 00:34:49.689 nvme0n2: ios=3368/3584, merge=0/0, ticks=23362/23732, in_queue=47094, util=98.07% 00:34:49.689 nvme0n3: ios=3110/3126, merge=0/0, ticks=26616/25014, in_queue=51630, util=95.53% 00:34:49.689 nvme0n4: ios=4188/4608, merge=0/0, ticks=22926/19935, in_queue=42861, util=98.12% 00:34:49.689 08:32:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:34:49.689 [global] 00:34:49.689 thread=1 00:34:49.689 invalidate=1 00:34:49.689 rw=randwrite 00:34:49.689 time_based=1 00:34:49.689 runtime=1 00:34:49.689 ioengine=libaio 00:34:49.689 direct=1 00:34:49.689 bs=4096 00:34:49.689 iodepth=128 00:34:49.689 norandommap=0 00:34:49.689 numjobs=1 00:34:49.689 00:34:49.689 verify_dump=1 00:34:49.689 verify_backlog=512 00:34:49.689 verify_state_save=0 00:34:49.689 do_verify=1 00:34:49.689 verify=crc32c-intel 00:34:49.689 [job0] 00:34:49.689 filename=/dev/nvme0n1 00:34:49.689 [job1] 00:34:49.689 filename=/dev/nvme0n2 00:34:49.689 [job2] 00:34:49.689 
filename=/dev/nvme0n3 00:34:49.689 [job3] 00:34:49.689 filename=/dev/nvme0n4 00:34:49.689 Could not set queue depth (nvme0n1) 00:34:49.689 Could not set queue depth (nvme0n2) 00:34:49.689 Could not set queue depth (nvme0n3) 00:34:49.689 Could not set queue depth (nvme0n4) 00:34:49.957 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:49.957 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:49.957 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:49.957 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:34:49.957 fio-3.35 00:34:49.957 Starting 4 threads 00:34:51.349 00:34:51.350 job0: (groupid=0, jobs=1): err= 0: pid=1931534: Wed Nov 20 08:32:04 2024 00:34:51.350 read: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec) 00:34:51.350 slat (nsec): min=1194, max=43672k, avg=98912.94, stdev=820047.25 00:34:51.350 clat (usec): min=1978, max=57333, avg=12495.33, stdev=6041.36 00:34:51.350 lat (usec): min=1982, max=57337, avg=12594.25, stdev=6077.35 00:34:51.350 clat percentiles (usec): 00:34:51.350 | 1.00th=[ 7701], 5.00th=[ 8848], 10.00th=[10028], 20.00th=[10683], 00:34:51.350 | 30.00th=[11076], 40.00th=[11600], 50.00th=[11863], 60.00th=[11994], 00:34:51.350 | 70.00th=[12256], 80.00th=[12649], 90.00th=[13304], 95.00th=[15008], 00:34:51.350 | 99.00th=[53740], 99.50th=[55313], 99.90th=[57410], 99.95th=[57410], 00:34:51.350 | 99.99th=[57410] 00:34:51.350 write: IOPS=4830, BW=18.9MiB/s (19.8MB/s)(18.9MiB/1004msec); 0 zone resets 00:34:51.350 slat (nsec): min=1902, max=21840k, avg=100644.66, stdev=670873.18 00:34:51.350 clat (usec): min=1736, max=60410, avg=14332.16, stdev=10165.21 00:34:51.350 lat (usec): min=1742, max=60421, avg=14432.81, stdev=10209.76 00:34:51.350 clat percentiles (usec): 00:34:51.350 | 1.00th=[ 
7111], 5.00th=[ 8455], 10.00th=[ 9765], 20.00th=[10421], 00:34:51.350 | 30.00th=[11207], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:34:51.350 | 70.00th=[12125], 80.00th=[12387], 90.00th=[18482], 95.00th=[42730], 00:34:51.350 | 99.00th=[58459], 99.50th=[60031], 99.90th=[60556], 99.95th=[60556], 00:34:51.350 | 99.99th=[60556] 00:34:51.350 bw ( KiB/s): min=18456, max=19328, per=24.06%, avg=18892.00, stdev=616.60, samples=2 00:34:51.350 iops : min= 4614, max= 4832, avg=4723.00, stdev=154.15, samples=2 00:34:51.350 lat (msec) : 2=0.15%, 4=0.16%, 10=10.22%, 20=84.09%, 50=2.52% 00:34:51.350 lat (msec) : 100=2.87% 00:34:51.350 cpu : usr=3.69%, sys=4.39%, ctx=521, majf=0, minf=1 00:34:51.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:51.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:51.350 issued rwts: total=4608,4850,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:51.350 job1: (groupid=0, jobs=1): err= 0: pid=1931535: Wed Nov 20 08:32:04 2024 00:34:51.350 read: IOPS=4097, BW=16.0MiB/s (16.8MB/s)(16.1MiB/1004msec) 00:34:51.350 slat (nsec): min=1568, max=18677k, avg=105137.78, stdev=651474.12 00:34:51.350 clat (usec): min=3440, max=53889, avg=13146.03, stdev=6297.17 00:34:51.350 lat (usec): min=4155, max=53895, avg=13251.17, stdev=6339.72 00:34:51.350 clat percentiles (usec): 00:34:51.350 | 1.00th=[ 8225], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[11076], 00:34:51.350 | 30.00th=[11600], 40.00th=[11731], 50.00th=[11863], 60.00th=[11994], 00:34:51.350 | 70.00th=[12256], 80.00th=[12649], 90.00th=[14353], 95.00th=[19268], 00:34:51.350 | 99.00th=[50594], 99.50th=[51643], 99.90th=[53740], 99.95th=[53740], 00:34:51.350 | 99.99th=[53740] 00:34:51.350 write: IOPS=4589, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1004msec); 0 zone resets 00:34:51.350 slat (usec): 
min=2, max=18481, avg=117.09, stdev=663.80 00:34:51.350 clat (msec): min=5, max=101, avg=15.77, stdev=13.04 00:34:51.350 lat (msec): min=5, max=101, avg=15.89, stdev=13.12 00:34:51.350 clat percentiles (msec): 00:34:51.350 | 1.00th=[ 9], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 11], 00:34:51.350 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 12], 60.00th=[ 12], 00:34:51.350 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 23], 95.00th=[ 40], 00:34:51.350 | 99.00th=[ 90], 99.50th=[ 100], 99.90th=[ 102], 99.95th=[ 102], 00:34:51.350 | 99.99th=[ 102] 00:34:51.350 bw ( KiB/s): min=17608, max=18384, per=22.92%, avg=17996.00, stdev=548.71, samples=2 00:34:51.350 iops : min= 4402, max= 4596, avg=4499.00, stdev=137.18, samples=2 00:34:51.350 lat (msec) : 4=0.01%, 10=4.43%, 20=85.19%, 50=8.04%, 100=2.10% 00:34:51.350 lat (msec) : 250=0.24% 00:34:51.350 cpu : usr=3.59%, sys=5.28%, ctx=518, majf=0, minf=1 00:34:51.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:34:51.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:51.350 issued rwts: total=4114,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:51.350 job2: (groupid=0, jobs=1): err= 0: pid=1931536: Wed Nov 20 08:32:04 2024 00:34:51.350 read: IOPS=5084, BW=19.9MiB/s (20.8MB/s)(20.0MiB/1007msec) 00:34:51.350 slat (nsec): min=1364, max=12428k, avg=103953.25, stdev=870647.63 00:34:51.350 clat (usec): min=5154, max=25869, avg=13106.93, stdev=3452.28 00:34:51.350 lat (usec): min=7337, max=31874, avg=13210.88, stdev=3546.36 00:34:51.350 clat percentiles (usec): 00:34:51.350 | 1.00th=[ 7898], 5.00th=[ 8225], 10.00th=[ 9372], 20.00th=[10814], 00:34:51.350 | 30.00th=[11338], 40.00th=[11994], 50.00th=[12387], 60.00th=[13042], 00:34:51.350 | 70.00th=[13698], 80.00th=[14877], 90.00th=[18220], 95.00th=[21365], 00:34:51.350 | 
99.00th=[23725], 99.50th=[24249], 99.90th=[25297], 99.95th=[25297], 00:34:51.350 | 99.99th=[25822] 00:34:51.350 write: IOPS=5283, BW=20.6MiB/s (21.6MB/s)(20.8MiB/1007msec); 0 zone resets 00:34:51.350 slat (usec): min=2, max=11220, avg=82.55, stdev=625.55 00:34:51.350 clat (usec): min=1494, max=24791, avg=11390.40, stdev=3038.47 00:34:51.350 lat (usec): min=1510, max=24795, avg=11472.95, stdev=3066.37 00:34:51.350 clat percentiles (usec): 00:34:51.350 | 1.00th=[ 3654], 5.00th=[ 6915], 10.00th=[ 7242], 20.00th=[ 9110], 00:34:51.350 | 30.00th=[10290], 40.00th=[10945], 50.00th=[11076], 60.00th=[11731], 00:34:51.350 | 70.00th=[12911], 80.00th=[13566], 90.00th=[14615], 95.00th=[17433], 00:34:51.350 | 99.00th=[19792], 99.50th=[19792], 99.90th=[24773], 99.95th=[24773], 00:34:51.350 | 99.99th=[24773] 00:34:51.350 bw ( KiB/s): min=20304, max=21416, per=26.56%, avg=20860.00, stdev=786.30, samples=2 00:34:51.350 iops : min= 5076, max= 5354, avg=5215.00, stdev=196.58, samples=2 00:34:51.350 lat (msec) : 2=0.18%, 4=0.47%, 10=19.70%, 20=76.34%, 50=3.30% 00:34:51.350 cpu : usr=3.28%, sys=7.16%, ctx=346, majf=0, minf=2 00:34:51.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:34:51.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:51.350 issued rwts: total=5120,5320,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:51.350 job3: (groupid=0, jobs=1): err= 0: pid=1931537: Wed Nov 20 08:32:04 2024 00:34:51.350 read: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec) 00:34:51.350 slat (nsec): min=1153, max=10647k, avg=95992.25, stdev=678955.96 00:34:51.350 clat (usec): min=3692, max=27583, avg=13305.71, stdev=2946.06 00:34:51.350 lat (usec): min=3703, max=27588, avg=13401.71, stdev=2983.58 00:34:51.350 clat percentiles (usec): 00:34:51.350 | 1.00th=[ 8291], 5.00th=[ 9241], 
10.00th=[10159], 20.00th=[11338], 00:34:51.350 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12649], 60.00th=[13566], 00:34:51.350 | 70.00th=[14746], 80.00th=[15270], 90.00th=[16581], 95.00th=[18220], 00:34:51.350 | 99.00th=[24249], 99.50th=[25297], 99.90th=[27657], 99.95th=[27657], 00:34:51.350 | 99.99th=[27657] 00:34:51.350 write: IOPS=4961, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1006msec); 0 zone resets 00:34:51.350 slat (usec): min=2, max=11254, avg=98.05, stdev=642.40 00:34:51.350 clat (usec): min=609, max=28112, avg=13145.79, stdev=3659.25 00:34:51.350 lat (usec): min=620, max=28121, avg=13243.84, stdev=3713.87 00:34:51.350 clat percentiles (usec): 00:34:51.350 | 1.00th=[ 4293], 5.00th=[ 7635], 10.00th=[ 8848], 20.00th=[10814], 00:34:51.350 | 30.00th=[11600], 40.00th=[13042], 50.00th=[13304], 60.00th=[13566], 00:34:51.350 | 70.00th=[13829], 80.00th=[14091], 90.00th=[17695], 95.00th=[21365], 00:34:51.350 | 99.00th=[24249], 99.50th=[25822], 99.90th=[27132], 99.95th=[27657], 00:34:51.350 | 99.99th=[28181] 00:34:51.350 bw ( KiB/s): min=19280, max=19632, per=24.78%, avg=19456.00, stdev=248.90, samples=2 00:34:51.350 iops : min= 4820, max= 4908, avg=4864.00, stdev=62.23, samples=2 00:34:51.350 lat (usec) : 750=0.04% 00:34:51.350 lat (msec) : 4=0.11%, 10=13.22%, 20=81.36%, 50=5.26% 00:34:51.350 cpu : usr=4.28%, sys=5.57%, ctx=366, majf=0, minf=1 00:34:51.350 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:34:51.350 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:51.350 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:34:51.350 issued rwts: total=4608,4991,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:51.350 latency : target=0, window=0, percentile=100.00%, depth=128 00:34:51.350 00:34:51.350 Run status group 0 (all jobs): 00:34:51.350 READ: bw=71.6MiB/s (75.0MB/s), 16.0MiB/s-19.9MiB/s (16.8MB/s-20.8MB/s), io=72.1MiB (75.6MB), run=1004-1007msec 00:34:51.350 WRITE: bw=76.7MiB/s (80.4MB/s), 
17.9MiB/s-20.6MiB/s (18.8MB/s-21.6MB/s), io=77.2MiB (81.0MB), run=1004-1007msec 00:34:51.350 00:34:51.350 Disk stats (read/write): 00:34:51.350 nvme0n1: ios=3836/4096, merge=0/0, ticks=16345/21127, in_queue=37472, util=90.88% 00:34:51.350 nvme0n2: ios=3563/3584, merge=0/0, ticks=15846/18308, in_queue=34154, util=98.58% 00:34:51.350 nvme0n3: ios=4369/4608, merge=0/0, ticks=54516/50570, in_queue=105086, util=90.97% 00:34:51.350 nvme0n4: ios=4143/4099, merge=0/0, ticks=39711/39639, in_queue=79350, util=98.22% 00:34:51.350 08:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:34:51.350 08:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=1931695 00:34:51.350 08:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:34:51.350 08:32:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:34:51.350 [global] 00:34:51.350 thread=1 00:34:51.350 invalidate=1 00:34:51.350 rw=read 00:34:51.350 time_based=1 00:34:51.350 runtime=10 00:34:51.350 ioengine=libaio 00:34:51.350 direct=1 00:34:51.350 bs=4096 00:34:51.350 iodepth=1 00:34:51.350 norandommap=1 00:34:51.350 numjobs=1 00:34:51.350 00:34:51.350 [job0] 00:34:51.350 filename=/dev/nvme0n1 00:34:51.350 [job1] 00:34:51.350 filename=/dev/nvme0n2 00:34:51.350 [job2] 00:34:51.350 filename=/dev/nvme0n3 00:34:51.350 [job3] 00:34:51.350 filename=/dev/nvme0n4 00:34:51.350 Could not set queue depth (nvme0n1) 00:34:51.351 Could not set queue depth (nvme0n2) 00:34:51.351 Could not set queue depth (nvme0n3) 00:34:51.351 Could not set queue depth (nvme0n4) 00:34:51.609 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:51.609 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:34:51.609 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:51.609 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:34:51.609 fio-3.35 00:34:51.609 Starting 4 threads 00:34:54.143 08:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:34:54.402 08:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:34:54.402 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=15343616, buflen=4096 00:34:54.402 fio: pid=1931905, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:54.661 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=47370240, buflen=4096 00:34:54.661 fio: pid=1931904, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:54.661 08:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:54.661 08:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:34:54.661 08:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:54.661 08:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:34:54.661 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=1036288, buflen=4096 00:34:54.661 fio: pid=1931902, 
err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:54.921 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=41037824, buflen=4096 00:34:54.921 fio: pid=1931903, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:34:54.921 08:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:54.921 08:32:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:34:54.921 00:34:54.921 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1931902: Wed Nov 20 08:32:08 2024 00:34:54.921 read: IOPS=81, BW=323KiB/s (331kB/s)(1012KiB/3132msec) 00:34:54.921 slat (usec): min=8, max=9868, avg=51.33, stdev=618.47 00:34:54.921 clat (usec): min=213, max=41906, avg=12237.72, stdev=18502.78 00:34:54.921 lat (usec): min=223, max=51058, avg=12289.16, stdev=18575.76 00:34:54.921 clat percentiles (usec): 00:34:54.921 | 1.00th=[ 235], 5.00th=[ 306], 10.00th=[ 330], 20.00th=[ 347], 00:34:54.921 | 30.00th=[ 355], 40.00th=[ 363], 50.00th=[ 371], 60.00th=[ 392], 00:34:54.921 | 70.00th=[ 515], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:54.921 | 99.00th=[41157], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:34:54.921 | 99.99th=[41681] 00:34:54.921 bw ( KiB/s): min= 96, max= 936, per=1.04%, avg=319.33, stdev=336.56, samples=6 00:34:54.921 iops : min= 24, max= 234, avg=79.83, stdev=84.14, samples=6 00:34:54.921 lat (usec) : 250=1.18%, 500=67.32%, 750=1.97% 00:34:54.921 lat (msec) : 50=29.13% 00:34:54.921 cpu : usr=0.16%, sys=0.06%, ctx=255, majf=0, minf=1 00:34:54.921 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:54.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:54.921 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.921 issued rwts: total=254,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.921 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:54.921 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1931903: Wed Nov 20 08:32:08 2024 00:34:54.921 read: IOPS=3011, BW=11.8MiB/s (12.3MB/s)(39.1MiB/3327msec) 00:34:54.921 slat (usec): min=6, max=29764, avg=13.26, stdev=343.32 00:34:54.921 clat (usec): min=169, max=42240, avg=314.64, stdev=2043.07 00:34:54.921 lat (usec): min=179, max=70977, avg=327.89, stdev=2172.88 00:34:54.921 clat percentiles (usec): 00:34:54.921 | 1.00th=[ 190], 5.00th=[ 196], 10.00th=[ 198], 20.00th=[ 202], 00:34:54.921 | 30.00th=[ 204], 40.00th=[ 206], 50.00th=[ 208], 60.00th=[ 210], 00:34:54.921 | 70.00th=[ 212], 80.00th=[ 219], 90.00th=[ 235], 95.00th=[ 249], 00:34:54.921 | 99.00th=[ 269], 99.50th=[ 314], 99.90th=[41157], 99.95th=[41157], 00:34:54.921 | 99.99th=[42206] 00:34:54.921 bw ( KiB/s): min= 3661, max=18608, per=43.00%, avg=13227.50, stdev=6586.65, samples=6 00:34:54.921 iops : min= 915, max= 4652, avg=3306.83, stdev=1646.73, samples=6 00:34:54.921 lat (usec) : 250=95.96%, 500=3.73% 00:34:54.921 lat (msec) : 2=0.03%, 4=0.02%, 50=0.25% 00:34:54.921 cpu : usr=1.68%, sys=4.75%, ctx=10023, majf=0, minf=2 00:34:54.921 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:54.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.921 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.921 issued rwts: total=10020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.921 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:54.921 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1931904: Wed Nov 20 08:32:08 2024 00:34:54.921 read: IOPS=3958, BW=15.5MiB/s 
(16.2MB/s)(45.2MiB/2922msec) 00:34:54.921 slat (usec): min=7, max=11570, avg=10.60, stdev=148.66 00:34:54.921 clat (usec): min=179, max=2084, avg=238.18, stdev=43.70 00:34:54.921 lat (usec): min=187, max=11973, avg=248.78, stdev=157.14 00:34:54.921 clat percentiles (usec): 00:34:54.921 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 206], 20.00th=[ 219], 00:34:54.921 | 30.00th=[ 233], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:34:54.921 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 258], 00:34:54.921 | 99.00th=[ 269], 99.50th=[ 289], 99.90th=[ 498], 99.95th=[ 1549], 00:34:54.921 | 99.99th=[ 1975] 00:34:54.921 bw ( KiB/s): min=15192, max=17024, per=51.23%, avg=15758.40, stdev=726.14, samples=5 00:34:54.921 iops : min= 3798, max= 4256, avg=3939.60, stdev=181.53, samples=5 00:34:54.921 lat (usec) : 250=77.21%, 500=22.70%, 750=0.02% 00:34:54.921 lat (msec) : 2=0.06%, 4=0.01% 00:34:54.921 cpu : usr=2.12%, sys=6.61%, ctx=11570, majf=0, minf=2 00:34:54.921 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:54.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.921 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.921 issued rwts: total=11566,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.921 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:54.921 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=1931905: Wed Nov 20 08:32:08 2024 00:34:54.921 read: IOPS=1373, BW=5491KiB/s (5622kB/s)(14.6MiB/2729msec) 00:34:54.921 slat (nsec): min=6522, max=33528, avg=7674.72, stdev=1974.23 00:34:54.921 clat (usec): min=188, max=41435, avg=713.80, stdev=4340.91 00:34:54.921 lat (usec): min=196, max=41447, avg=721.47, stdev=4342.60 00:34:54.921 clat percentiles (usec): 00:34:54.921 | 1.00th=[ 225], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 237], 00:34:54.921 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 
60.00th=[ 247], 00:34:54.921 | 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 262], 95.00th=[ 265], 00:34:54.921 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:34:54.921 | 99.99th=[41681] 00:34:54.921 bw ( KiB/s): min= 96, max=15864, per=18.65%, avg=5736.00, stdev=7809.25, samples=5 00:34:54.921 iops : min= 24, max= 3966, avg=1434.00, stdev=1952.31, samples=5 00:34:54.921 lat (usec) : 250=68.27%, 500=30.53%, 750=0.03% 00:34:54.921 lat (msec) : 50=1.15% 00:34:54.921 cpu : usr=0.26%, sys=1.43%, ctx=3747, majf=0, minf=2 00:34:54.921 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:54.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.921 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:54.921 issued rwts: total=3747,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:54.921 latency : target=0, window=0, percentile=100.00%, depth=1 00:34:54.922 00:34:54.922 Run status group 0 (all jobs): 00:34:54.922 READ: bw=30.0MiB/s (31.5MB/s), 323KiB/s-15.5MiB/s (331kB/s-16.2MB/s), io=99.9MiB (105MB), run=2729-3327msec 00:34:54.922 00:34:54.922 Disk stats (read/write): 00:34:54.922 nvme0n1: ios=252/0, merge=0/0, ticks=3054/0, in_queue=3054, util=95.47% 00:34:54.922 nvme0n2: ios=10015/0, merge=0/0, ticks=2888/0, in_queue=2888, util=95.55% 00:34:54.922 nvme0n3: ios=11400/0, merge=0/0, ticks=3503/0, in_queue=3503, util=98.95% 00:34:54.922 nvme0n4: ios=3743/0, merge=0/0, ticks=2540/0, in_queue=2540, util=96.45% 00:34:55.181 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:55.181 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:34:55.439 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # 
for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:55.439 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:34:55.697 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:55.697 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:34:55.697 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:34:55.697 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:34:55.956 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:34:55.956 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 1931695 00:34:55.956 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:34:55.956 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:34:56.215 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:34:56.215 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:34:56.215 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:34:56.215 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o 
NAME,SERIAL 00:34:56.215 08:32:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:34:56.215 nvmf hotplug test: fio failed as expected 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:56.215 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@99 -- # sync 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:56.475 rmmod nvme_tcp 00:34:56.475 rmmod nvme_fabrics 00:34:56.475 rmmod nvme_keyring 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 1929080 ']' 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 1929080 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 1929080 ']' 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 1929080 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1929080 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1929080' 00:34:56.475 killing process with pid 1929080 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 1929080 00:34:56.475 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 1929080 00:34:56.735 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:34:56.735 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:34:56.735 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@254 -- # local dev 00:34:56.735 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@257 -- # remove_target_ns 00:34:56.735 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:56.735 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:56.735 08:32:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@258 -- # delete_main_bridge 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@121 -- # return 0 00:34:58.642 08:32:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@274 -- # iptr 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-save 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@548 -- # iptables-restore 00:34:58.642 00:34:58.642 real 0m26.014s 00:34:58.642 user 1m32.668s 00:34:58.642 sys 0m11.220s 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:34:58.642 ************************************ 00:34:58.642 END TEST nvmf_fio_target 00:34:58.642 ************************************ 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:58.642 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:34:58.902 ************************************ 
00:34:58.902 START TEST nvmf_bdevio 00:34:58.902 ************************************ 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:34:58.902 * Looking for test storage... 00:34:58.902 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:34:58.902 08:32:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 
00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:58.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.902 --rc genhtml_branch_coverage=1 00:34:58.902 --rc genhtml_function_coverage=1 00:34:58.902 --rc genhtml_legend=1 00:34:58.902 --rc geninfo_all_blocks=1 00:34:58.902 --rc geninfo_unexecuted_blocks=1 00:34:58.902 00:34:58.902 ' 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:58.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.902 --rc genhtml_branch_coverage=1 00:34:58.902 --rc genhtml_function_coverage=1 00:34:58.902 --rc genhtml_legend=1 00:34:58.902 --rc geninfo_all_blocks=1 00:34:58.902 --rc geninfo_unexecuted_blocks=1 00:34:58.902 00:34:58.902 ' 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:58.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.902 --rc genhtml_branch_coverage=1 00:34:58.902 --rc genhtml_function_coverage=1 00:34:58.902 --rc genhtml_legend=1 00:34:58.902 --rc geninfo_all_blocks=1 00:34:58.902 --rc geninfo_unexecuted_blocks=1 00:34:58.902 00:34:58.902 ' 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:34:58.902 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:58.902 --rc genhtml_branch_coverage=1 00:34:58.902 --rc genhtml_function_coverage=1 00:34:58.902 --rc genhtml_legend=1 00:34:58.902 --rc geninfo_all_blocks=1 00:34:58.902 --rc geninfo_unexecuted_blocks=1 00:34:58.902 00:34:58.902 ' 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:58.902 08:32:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.902 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' 
1 -eq 1 ']' 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ 
phy != virt ]] 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:34:58.903 08:32:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=() 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:35:05.475 08:32:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:05.475 08:32:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:05.475 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:05.475 Found 0000:86:00.1 (0x8086 - 0x159b) 
00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:05.475 Found net devices under 0000:86:00.0: cvl_0_0 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:05.475 Found net devices under 0000:86:00.1: cvl_0_1 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:05.475 08:32:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:05.475 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@247 -- # create_target_ns 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:35:05.476 08:32:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@55 -- # target=cvl_0_1 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:05.476 08:32:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:05.476 10.0.0.1 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:05.476 
08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:05.476 10.0.0.2 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:35:05.476 
08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 
00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:05.476 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval 'ip netns 
exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:05.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:05.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.502 ms 00:35:05.477 00:35:05.477 --- 10.0.0.1 ping statistics --- 00:35:05.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.477 rtt min/avg/max/mdev = 0.502/0.502/0.502/0.000 ms 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:35:05.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:35:05.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:35:05.477 00:35:05.477 --- 10.0.0.2 ping statistics --- 00:35:05.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:05.477 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator0 
00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:05.477 
08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target0 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:05.477 
08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:05.477 
08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # local dev=target1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # return 1 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@159 -- # dev= 00:35:05.477 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@160 -- # return 0 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:35:05.478 ' 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=1936168 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 1936168 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 1936168 ']' 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.478 08:32:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.478 [2024-11-20 08:32:18.947121] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:05.478 [2024-11-20 08:32:18.948061] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:35:05.478 [2024-11-20 08:32:18.948099] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.478 [2024-11-20 08:32:19.027571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:05.478 [2024-11-20 08:32:19.069003] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:05.478 [2024-11-20 08:32:19.069040] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:05.478 [2024-11-20 08:32:19.069047] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:05.478 [2024-11-20 08:32:19.069053] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:05.478 [2024-11-20 08:32:19.069058] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:05.478 [2024-11-20 08:32:19.070527] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:35:05.478 [2024-11-20 08:32:19.070636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:35:05.478 [2024-11-20 08:32:19.070743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:05.478 [2024-11-20 08:32:19.070744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:35:05.478 [2024-11-20 08:32:19.137075] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:05.478 [2024-11-20 08:32:19.138090] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:35:05.478 [2024-11-20 08:32:19.138161] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:35:05.478 [2024-11-20 08:32:19.138570] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:05.478 [2024-11-20 08:32:19.138607] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.478 [2024-11-20 08:32:19.203609] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.478 Malloc0 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.478 
08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:05.478 [2024-11-20 08:32:19.283654] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:35:05.478 
08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:35:05.478 { 00:35:05.478 "params": { 00:35:05.478 "name": "Nvme$subsystem", 00:35:05.478 "trtype": "$TEST_TRANSPORT", 00:35:05.478 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:05.478 "adrfam": "ipv4", 00:35:05.478 "trsvcid": "$NVMF_PORT", 00:35:05.478 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:05.478 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:05.478 "hdgst": ${hdgst:-false}, 00:35:05.478 "ddgst": ${ddgst:-false} 00:35:05.478 }, 00:35:05.478 "method": "bdev_nvme_attach_controller" 00:35:05.478 } 00:35:05.478 EOF 00:35:05.478 )") 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:35:05.478 08:32:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:35:05.478 "params": { 00:35:05.478 "name": "Nvme1", 00:35:05.478 "trtype": "tcp", 00:35:05.478 "traddr": "10.0.0.2", 00:35:05.478 "adrfam": "ipv4", 00:35:05.478 "trsvcid": "4420", 00:35:05.478 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:05.478 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:05.478 "hdgst": false, 00:35:05.478 "ddgst": false 00:35:05.478 }, 00:35:05.478 "method": "bdev_nvme_attach_controller" 00:35:05.478 }' 00:35:05.478 [2024-11-20 08:32:19.337198] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:35:05.478 [2024-11-20 08:32:19.337254] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1936200 ] 00:35:05.478 [2024-11-20 08:32:19.414724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:35:05.478 [2024-11-20 08:32:19.458669] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.478 [2024-11-20 08:32:19.458777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.478 [2024-11-20 08:32:19.458778] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:05.738 I/O targets: 00:35:05.738 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:35:05.738 00:35:05.738 00:35:05.738 CUnit - A unit testing framework for C - Version 2.1-3 00:35:05.738 http://cunit.sourceforge.net/ 00:35:05.738 00:35:05.738 00:35:05.738 Suite: bdevio tests on: Nvme1n1 00:35:05.738 Test: blockdev write read block ...passed 00:35:05.738 Test: blockdev write zeroes read block ...passed 00:35:05.738 Test: blockdev write zeroes read no split ...passed 00:35:05.997 Test: blockdev 
write zeroes read split ...passed 00:35:05.997 Test: blockdev write zeroes read split partial ...passed 00:35:05.997 Test: blockdev reset ...[2024-11-20 08:32:19.839767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:35:05.997 [2024-11-20 08:32:19.839830] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x22f3340 (9): Bad file descriptor 00:35:05.997 [2024-11-20 08:32:19.932411] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:35:05.997 passed 00:35:05.997 Test: blockdev write read 8 blocks ...passed 00:35:05.997 Test: blockdev write read size > 128k ...passed 00:35:05.997 Test: blockdev write read invalid size ...passed 00:35:05.997 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:35:05.997 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:35:05.997 Test: blockdev write read max offset ...passed 00:35:06.256 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:35:06.256 Test: blockdev writev readv 8 blocks ...passed 00:35:06.256 Test: blockdev writev readv 30 x 1block ...passed 00:35:06.256 Test: blockdev writev readv block ...passed 00:35:06.256 Test: blockdev writev readv size > 128k ...passed 00:35:06.256 Test: blockdev writev readv size > 128k in two iovs ...passed 00:35:06.256 Test: blockdev comparev and writev ...[2024-11-20 08:32:20.224351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.256 [2024-11-20 08:32:20.224385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:35:06.256 [2024-11-20 08:32:20.224400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.256 
[2024-11-20 08:32:20.224409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:35:06.256 [2024-11-20 08:32:20.224705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.256 [2024-11-20 08:32:20.224715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:35:06.256 [2024-11-20 08:32:20.224726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.256 [2024-11-20 08:32:20.224733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:35:06.257 [2024-11-20 08:32:20.225007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.257 [2024-11-20 08:32:20.225018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:35:06.257 [2024-11-20 08:32:20.225035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.257 [2024-11-20 08:32:20.225042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:35:06.257 [2024-11-20 08:32:20.225337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.257 [2024-11-20 08:32:20.225348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:35:06.257 [2024-11-20 08:32:20.225359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:35:06.257 [2024-11-20 08:32:20.225366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:35:06.257 passed 00:35:06.516 Test: blockdev nvme passthru rw ...passed 00:35:06.516 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:32:20.307585] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:06.516 [2024-11-20 08:32:20.307603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:35:06.516 [2024-11-20 08:32:20.307715] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:06.516 [2024-11-20 08:32:20.307725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:35:06.516 [2024-11-20 08:32:20.307830] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:06.516 [2024-11-20 08:32:20.307839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:35:06.516 [2024-11-20 08:32:20.307947] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:35:06.516 [2024-11-20 08:32:20.307956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:35:06.516 passed 00:35:06.516 Test: blockdev nvme admin passthru ...passed 00:35:06.516 Test: blockdev copy ...passed 00:35:06.516 00:35:06.516 Run Summary: Type Total Ran Passed Failed Inactive 00:35:06.516 suites 1 1 n/a 0 0 00:35:06.516 tests 23 23 23 0 0 00:35:06.516 asserts 152 152 152 0 n/a 00:35:06.516 00:35:06.516 Elapsed time = 1.428 
seconds 00:35:06.516 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:06.516 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.516 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:35:06.516 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.516 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:35:06.516 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:35:06.516 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:06.516 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:35:06.516 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:06.516 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:35:06.516 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:06.516 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:06.516 rmmod nvme_tcp 00:35:06.516 rmmod nvme_fabrics 00:35:06.776 rmmod nvme_keyring 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@336 -- # '[' -n 1936168 ']' 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 1936168 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 1936168 ']' 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 1936168 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1936168 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1936168' 00:35:06.776 killing process with pid 1936168 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 1936168 00:35:06.776 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 1936168 00:35:07.036 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:07.036 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:35:07.036 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@254 -- # local dev 00:35:07.036 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@257 -- # 
remove_target_ns 00:35:07.036 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:07.036 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:35:07.036 08:32:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@258 -- # delete_main_bridge 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@121 -- # return 0 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:08.944 08:32:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:08.944 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@274 -- # iptr 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-save 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@548 -- # iptables-restore 00:35:08.945 00:35:08.945 real 0m10.184s 00:35:08.945 user 0m9.604s 00:35:08.945 sys 0m5.302s 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:35:08.945 ************************************ 00:35:08.945 END TEST nvmf_bdevio 00:35:08.945 ************************************ 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:35:08.945 00:35:08.945 real 4m35.913s 00:35:08.945 user 9m11.574s 00:35:08.945 sys 1m52.378s 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:08.945 08:32:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:35:08.945 ************************************ 00:35:08.945 END TEST nvmf_target_core_interrupt_mode 00:35:08.945 ************************************ 00:35:08.945 08:32:22 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:08.945 08:32:22 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:35:08.945 08:32:22 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:08.945 08:32:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:09.205 ************************************ 00:35:09.205 START TEST nvmf_interrupt 00:35:09.205 ************************************ 00:35:09.205 08:32:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:35:09.205 * Looking for test storage... 
00:35:09.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:09.205 08:32:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:09.205 08:32:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:35:09.205 08:32:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:09.205 08:32:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:09.205 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:09.205 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:09.205 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:09.205 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:35:09.205 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:35:09.205 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:35:09.205 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:35:09.205 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:09.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.206 --rc genhtml_branch_coverage=1 00:35:09.206 --rc genhtml_function_coverage=1 00:35:09.206 --rc genhtml_legend=1 00:35:09.206 --rc geninfo_all_blocks=1 00:35:09.206 --rc geninfo_unexecuted_blocks=1 00:35:09.206 00:35:09.206 ' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:09.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.206 --rc genhtml_branch_coverage=1 00:35:09.206 --rc 
genhtml_function_coverage=1 00:35:09.206 --rc genhtml_legend=1 00:35:09.206 --rc geninfo_all_blocks=1 00:35:09.206 --rc geninfo_unexecuted_blocks=1 00:35:09.206 00:35:09.206 ' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:09.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.206 --rc genhtml_branch_coverage=1 00:35:09.206 --rc genhtml_function_coverage=1 00:35:09.206 --rc genhtml_legend=1 00:35:09.206 --rc geninfo_all_blocks=1 00:35:09.206 --rc geninfo_unexecuted_blocks=1 00:35:09.206 00:35:09.206 ' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:09.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:09.206 --rc genhtml_branch_coverage=1 00:35:09.206 --rc genhtml_function_coverage=1 00:35:09.206 --rc genhtml_legend=1 00:35:09.206 --rc geninfo_all_blocks=1 00:35:09.206 --rc geninfo_unexecuted_blocks=1 00:35:09.206 00:35:09.206 ' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:09.206 
08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:09.206 
08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@50 -- # : 0 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@260 -- # remove_target_ns 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@313 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # xtrace_disable 00:35:09.206 08:32:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # pci_devs=() 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # net_devs=() 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # e810=() 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # local -ga e810 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # x722=() 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # local -ga x722 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # mlx=() 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # local -ga mlx 00:35:15.776 
08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:15.776 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:15.776 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:15.777 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:15.777 Found net devices under 0000:86:00.0: cvl_0_0 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:15.777 Found net devices under 0000:86:00.1: cvl_0_1 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # is_hw=yes 
00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@247 -- # create_target_ns 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@28 -- # local -g _dev 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # ips=() 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 
ip=167772161 in_ns= 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772161 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:15.777 10.0.0.1 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772162 00:35:15.777 08:32:28 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_1' 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:15.777 10.0.0.2 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:35:15.777 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:15.778 08:32:29 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:15.778 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:15.778 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.424 ms 00:35:15.778 00:35:15.778 --- 10.0.0.1 ping statistics --- 00:35:15.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.778 rtt min/avg/max/mdev = 0.424/0.424/0.424/0.000 ms 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:35:15.778 08:32:29 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:35:15.778 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:15.778 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:35:15.778 00:35:15.778 --- 10.0.0.2 ping statistics --- 00:35:15.778 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.778 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@270 -- # return 0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local 
dev=initiator0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # return 1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev= 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@160 -- # return 0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:15.778 08:32:29 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # local dev=target1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # return 1 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@159 -- # dev= 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@160 -- # return 0 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:35:15.778 ' 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # '[' 
tcp == tcp ']' 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # nvmfpid=1939992 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@329 -- # waitforlisten 1939992 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 1939992 ']' 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:15.778 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:15.778 08:32:29 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:15.778 [2024-11-20 08:32:29.316501] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:35:15.778 [2024-11-20 08:32:29.317394] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:35:15.779 [2024-11-20 08:32:29.317426] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:15.779 [2024-11-20 08:32:29.401942] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:15.779 [2024-11-20 08:32:29.442537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:15.779 [2024-11-20 08:32:29.442573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:15.779 [2024-11-20 08:32:29.442580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:15.779 [2024-11-20 08:32:29.442586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:15.779 [2024-11-20 08:32:29.442591] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:15.779 [2024-11-20 08:32:29.443746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:15.779 [2024-11-20 08:32:29.443746] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:15.779 [2024-11-20 08:32:29.509900] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:35:15.779 [2024-11-20 08:32:29.510446] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:35:15.779 [2024-11-20 08:32:29.510678] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:35:16.348 5000+0 records in 00:35:16.348 5000+0 records out 00:35:16.348 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0172367 s, 594 MB/s 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.348 AIO0 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.348 08:32:30 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.348 [2024-11-20 08:32:30.272635] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:16.348 [2024-11-20 08:32:30.312956] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1939992 0 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1939992 0 idle 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1939992 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1939992 -w 256 00:35:16.348 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1939992 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.26 reactor_0' 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1939992 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.26 reactor_0 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:16.608 
08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 1939992 1 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1939992 1 idle 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1939992 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1939992 -w 256 00:35:16.608 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1939996 root 20 0 128.2g 45312 33792 S 0.0 0.0 0:00.00 reactor_1' 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1939996 root 20 0 128.2g 
45312 33792 S 0.0 0.0 0:00.00 reactor_1 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=1940254 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1939992 0 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1939992 0 busy 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1939992 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1939992 -w 256 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1939992 root 20 0 128.2g 46080 33792 R 66.7 0.0 0:00.36 reactor_0' 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1939992 root 20 0 128.2g 46080 33792 R 66.7 0.0 0:00.36 reactor_0 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:35:16.868 08:32:30 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 1939992 1 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 1939992 1 busy 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1939992 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1939992 -w 256 00:35:16.868 08:32:30 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:17.127 08:32:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1939996 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:00.23 reactor_1' 00:35:17.127 08:32:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1939996 root 20 0 128.2g 46080 33792 R 99.9 0.0 0:00.23 reactor_1 00:35:17.127 08:32:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:17.127 08:32:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:17.127 08:32:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:35:17.127 08:32:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:35:17.127 08:32:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:35:17.127 08:32:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:35:17.127 08:32:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:35:17.127 08:32:31 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:17.127 08:32:31 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 1940254 00:35:27.124 Initializing NVMe Controllers 00:35:27.124 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:35:27.124 Controller IO queue size 256, less than required. 00:35:27.124 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:35:27.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:35:27.124 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:35:27.124 Initialization complete. Launching workers. 
00:35:27.124 ======================================================== 00:35:27.124 Latency(us) 00:35:27.124 Device Information : IOPS MiB/s Average min max 00:35:27.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16168.70 63.16 15841.12 3041.63 30000.34 00:35:27.124 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16415.70 64.12 15599.21 7357.76 26619.25 00:35:27.124 ======================================================== 00:35:27.124 Total : 32584.39 127.28 15719.25 3041.63 30000.34 00:35:27.124 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1939992 0 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1939992 0 idle 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1939992 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1939992 -w 256 00:35:27.124 08:32:40 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1939992 root 20 0 128.2g 46080 33792 S 6.2 0.0 0:20.25 reactor_0' 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1939992 root 20 0 128.2g 46080 33792 S 6.2 0.0 0:20.25 reactor_0 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 1939992 1 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1939992 1 idle 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1939992 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:27.124 08:32:41 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1939992 -w 256 00:35:27.124 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:27.383 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1939996 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:10.00 reactor_1' 00:35:27.383 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1939996 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:10.00 reactor_1 00:35:27.383 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:27.383 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:27.383 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:27.383 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:27.383 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:27.383 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:27.383 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:27.383 08:32:41 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:27.383 08:32:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:35:27.642 08:32:41 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:35:27.642 08:32:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:35:27.642 08:32:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:35:27.642 08:32:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:35:27.642 08:32:41 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1939992 0 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1939992 0 idle 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1939992 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:30.177 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1939992 -w 256 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1939992 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:20.49 reactor_0' 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1939992 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:20.49 reactor_0 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 1939992 1 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 1939992 1 idle 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=1939992 00:35:30.178 
08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 1939992 -w 256 00:35:30.178 08:32:43 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:35:30.178 08:32:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='1939996 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.09 reactor_1' 00:35:30.178 08:32:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 1939996 root 20 0 128.2g 72192 33792 S 0.0 0.0 0:10.09 reactor_1 00:35:30.178 08:32:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:35:30.178 08:32:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:35:30.178 08:32:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:35:30.178 08:32:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:35:30.178 08:32:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:35:30.178 08:32:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:35:30.178 08:32:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:35:30.178 08:32:44 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:35:30.178 08:32:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:35:30.438 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@99 -- # sync 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@102 -- # set +e 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:30.438 rmmod nvme_tcp 00:35:30.438 rmmod nvme_fabrics 00:35:30.438 rmmod nvme_keyring 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:30.438 08:32:44 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@106 -- # set -e 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@107 -- # return 0 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # '[' -n 1939992 ']' 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@337 -- # killprocess 1939992 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 1939992 ']' 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 1939992 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1939992 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1939992' 00:35:30.438 killing process with pid 1939992 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 1939992 00:35:30.438 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 1939992 00:35:30.698 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:35:30.698 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # nvmf_fini 00:35:30.698 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@254 -- # local dev 00:35:30.698 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@257 -- # remove_target_ns 00:35:30.698 08:32:44 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:30.698 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> 
/dev/null' 00:35:30.698 08:32:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@258 -- # delete_main_bridge 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@121 -- # return 0 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@273 -- # reset_setup_interfaces 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # _dev=0 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # dev_map=() 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@274 -- # iptr 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # iptables-save 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@548 -- # iptables-restore 00:35:33.235 00:35:33.235 real 0m23.731s 00:35:33.235 user 0m39.225s 00:35:33.235 sys 0m9.265s 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:33.235 08:32:46 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:35:33.235 ************************************ 00:35:33.235 END TEST nvmf_interrupt 00:35:33.235 ************************************ 00:35:33.235 00:35:33.235 real 27m47.022s 00:35:33.235 user 57m11.992s 00:35:33.235 sys 9m24.383s 00:35:33.235 08:32:46 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:33.235 08:32:46 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.235 ************************************ 00:35:33.235 END TEST nvmf_tcp 00:35:33.235 ************************************ 00:35:33.235 08:32:46 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:35:33.235 08:32:46 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:33.235 08:32:46 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:33.235 08:32:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:33.235 08:32:46 -- common/autotest_common.sh@10 -- # set +x 00:35:33.235 ************************************ 00:35:33.235 START TEST spdkcli_nvmf_tcp 00:35:33.235 ************************************ 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:35:33.235 * Looking for test storage... 00:35:33.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:35:33.235 08:32:46 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:35:33.235 08:32:47 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:35:33.235 08:32:47 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:33.235 08:32:47 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:35:33.235 08:32:47 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:35:33.235 08:32:47 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:33.235 08:32:47 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:33.235 08:32:47 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:35:33.235 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:33.235 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:33.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.235 --rc genhtml_branch_coverage=1 00:35:33.235 --rc genhtml_function_coverage=1 00:35:33.235 --rc genhtml_legend=1 00:35:33.235 --rc geninfo_all_blocks=1 00:35:33.235 --rc geninfo_unexecuted_blocks=1 00:35:33.235 00:35:33.235 ' 00:35:33.235 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:33.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.235 --rc genhtml_branch_coverage=1 00:35:33.235 --rc genhtml_function_coverage=1 00:35:33.235 --rc genhtml_legend=1 00:35:33.235 --rc geninfo_all_blocks=1 00:35:33.235 --rc 
geninfo_unexecuted_blocks=1 00:35:33.235 00:35:33.235 ' 00:35:33.235 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:33.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.235 --rc genhtml_branch_coverage=1 00:35:33.235 --rc genhtml_function_coverage=1 00:35:33.235 --rc genhtml_legend=1 00:35:33.235 --rc geninfo_all_blocks=1 00:35:33.235 --rc geninfo_unexecuted_blocks=1 00:35:33.235 00:35:33.235 ' 00:35:33.235 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:33.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:33.236 --rc genhtml_branch_coverage=1 00:35:33.236 --rc genhtml_function_coverage=1 00:35:33.236 --rc genhtml_legend=1 00:35:33.236 --rc geninfo_all_blocks=1 00:35:33.236 --rc geninfo_unexecuted_blocks=1 00:35:33.236 00:35:33.236 ' 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:33.236 
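The trace above shows the harness gating lcov coverage options on `lt 1.15 2`, a dotted-version comparison from `scripts/common.sh` that splits each version on `.` and compares component by component. A minimal sketch of that idea (assumption: this is a simplified re-implementation for illustration, not SPDK's actual `cmp_versions`, which also handles `-`-separated suffixes and operators other than `<`):

```shell
#!/usr/bin/env bash
# lt VER1 VER2 -- exit 0 if VER1 orders strictly before VER2, else exit 1.
# Simplified sketch: numeric dot-separated components only.
lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    # Missing components compare as 0, so "1.15" vs "2" works.
    local x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1  # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2: use legacy --rc options"
```

Note that this is a version ordering, not a decimal one: `1.2` sorts before `1.15` because the second components compare as 2 < 15.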
08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@50 -- # : 0 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:33.236 
08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:33.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1942941 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1942941 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 1942941 ']' 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:33.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:33.236 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.236 [2024-11-20 08:32:47.092390] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:35:33.236 [2024-11-20 08:32:47.092437] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1942941 ] 00:35:33.236 [2024-11-20 08:32:47.148319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:35:33.236 [2024-11-20 08:32:47.195222] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:33.236 [2024-11-20 08:32:47.195226] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:33.495 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:33.495 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:35:33.495 08:32:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:35:33.495 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:33.495 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.495 08:32:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:35:33.495 08:32:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:35:33.495 08:32:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:35:33.495 08:32:47 spdkcli_nvmf_tcp 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:35:33.495 08:32:47 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:33.495 08:32:47 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:35:33.495 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:35:33.495 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:35:33.495 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:35:33.495 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:35:33.495 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:35:33.495 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:35:33.495 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:33.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:35:33.495 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:33.496 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:33.496 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:33.496 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:35:33.496 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:35:33.496 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:35:33.496 ' 00:35:36.030 [2024-11-20 08:32:50.011459] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:37.509 [2024-11-20 08:32:51.347939] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:35:40.045 [2024-11-20 08:32:53.831531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:35:42.577 [2024-11-20 08:32:55.994157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:35:43.955 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:35:43.955 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:35:43.955 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:35:43.955 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:35:43.955 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:35:43.955 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:35:43.955 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:35:43.955 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:43.955 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:43.955 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:35:43.955 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:35:43.955 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:35:43.955 08:32:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:35:43.955 08:32:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:43.955 
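After building the nvmf configuration above, the test enters `check_match`: it captures the live `spdkcli.py ll /nvmf` listing and compares it against a stored expectation file (`spdkcli_nvmf.test.match`). A minimal sketch of that compare-against-expectation step, assuming plain `diff` as a stand-in for SPDK's `test/app/match` binary (which additionally supports wildcard tokens in the match file):

```shell
#!/usr/bin/env bash
# Sketch of the check_match idea: write captured output next to a stored
# expectation and fail the test on any divergence. Contents are illustrative.
expected=$(mktemp) actual=$(mktemp)

# Stored expectation (in the real test this is a *.test.match file in-tree).
printf '%s\n' "nqn.2014-08.org.spdk:cnode1" "nqn.2014-08.org.spdk:cnode2" > "$expected"

# Captured CLI output (in the real test: scripts/spdkcli.py ll /nvmf).
printf '%s\n' "nqn.2014-08.org.spdk:cnode1" "nqn.2014-08.org.spdk:cnode2" > "$actual"

if diff -u "$expected" "$actual" >/dev/null; then
  echo "match OK"
else
  echo "match FAILED" >&2
fi
rm -f "$expected" "$actual"
```

The real harness also removes the generated `.test` file after a successful compare, which is what the `rm -f .../spdkcli_nvmf.test` line in the trace does.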
08:32:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:43.955 08:32:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:35:43.955 08:32:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:43.955 08:32:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:43.955 08:32:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:35:43.955 08:32:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:35:44.216 08:32:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:35:44.216 08:32:58 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:35:44.216 08:32:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:35:44.216 08:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:44.216 08:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:44.474 08:32:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:35:44.474 08:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:44.474 08:32:58 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:44.474 08:32:58 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:35:44.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:35:44.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:44.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:35:44.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:35:44.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:35:44.474 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:35:44.474 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:35:44.474 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:35:44.474 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:35:44.474 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:35:44.474 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:35:44.474 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:35:44.474 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:35:44.474 ' 00:35:49.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:35:49.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:35:49.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:49.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:35:49.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:35:49.745 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:35:49.745 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:35:49.745 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:35:49.745 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:35:49.745 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:35:49.745 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:35:49.745 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:35:49.745 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:35:49.745 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:35:50.004 08:33:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:35:50.004 08:33:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:50.004 08:33:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:50.005 08:33:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1942941 00:35:50.005 08:33:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1942941 ']' 00:35:50.005 08:33:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1942941 00:35:50.005 08:33:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:35:50.005 08:33:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:50.005 08:33:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1942941 00:35:50.005 08:33:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:50.005 08:33:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:50.005 08:33:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1942941' 00:35:50.005 killing process with pid 1942941 00:35:50.005 08:33:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 1942941 00:35:50.005 08:33:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 1942941 00:35:50.263 08:33:04 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 
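The teardown above calls `killprocess` on the `nvmf_tgt` pid: it first probes that the process still exists, signals it, then waits for it to exit so the next test starts from a clean slate (the later `kill: ... No such process` line in this trace is the benign case where cleanup runs twice). A simplified sketch of that kill-and-wait pattern (assumption: SPDK's real `killprocess` also inspects `uname`/`ps` to special-case sudo-launched daemons, which is elided here):

```shell
#!/usr/bin/env bash
# killprocess PID -- terminate PID if alive, reap it, report either way.
killprocess() {
  local pid=$1
  # kill -0 delivers no signal; it only checks the pid exists.
  if ! kill -0 "$pid" 2>/dev/null; then
    echo "Process with pid $pid is not found"
    return 0
  fi
  kill "$pid" 2>/dev/null
  # Reap the child so the pid cannot be reused mid-test.
  wait "$pid" 2>/dev/null
  echo "killed process with pid $pid"
}

sleep 30 &   # stand-in for the long-running nvmf_tgt daemon
killprocess $!
```

Calling it a second time on the same pid takes the `not found` branch, matching the double-cleanup sequence visible in the log.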
-- # cleanup 00:35:50.263 08:33:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:35:50.263 08:33:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1942941 ']' 00:35:50.263 08:33:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1942941 00:35:50.263 08:33:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 1942941 ']' 00:35:50.263 08:33:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 1942941 00:35:50.263 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1942941) - No such process 00:35:50.263 08:33:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 1942941 is not found' 00:35:50.263 Process with pid 1942941 is not found 00:35:50.263 08:33:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:35:50.263 08:33:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:35:50.263 08:33:04 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:35:50.263 00:35:50.264 real 0m17.296s 00:35:50.264 user 0m38.172s 00:35:50.264 sys 0m0.780s 00:35:50.264 08:33:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:50.264 08:33:04 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:35:50.264 ************************************ 00:35:50.264 END TEST spdkcli_nvmf_tcp 00:35:50.264 ************************************ 00:35:50.264 08:33:04 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:50.264 08:33:04 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:50.264 08:33:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:50.264 08:33:04 -- 
common/autotest_common.sh@10 -- # set +x 00:35:50.264 ************************************ 00:35:50.264 START TEST nvmf_identify_passthru 00:35:50.264 ************************************ 00:35:50.264 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:35:50.264 * Looking for test storage... 00:35:50.264 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:50.264 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:50.523 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:35:50.523 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:50.523 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:35:50.523 
08:33:04 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:35:50.523 08:33:04 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:35:50.524 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:50.524 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:50.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.524 --rc genhtml_branch_coverage=1 00:35:50.524 --rc genhtml_function_coverage=1 00:35:50.524 --rc genhtml_legend=1 00:35:50.524 --rc geninfo_all_blocks=1 00:35:50.524 --rc geninfo_unexecuted_blocks=1 00:35:50.524 00:35:50.524 ' 
00:35:50.524 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:50.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.524 --rc genhtml_branch_coverage=1 00:35:50.524 --rc genhtml_function_coverage=1 00:35:50.524 --rc genhtml_legend=1 00:35:50.524 --rc geninfo_all_blocks=1 00:35:50.524 --rc geninfo_unexecuted_blocks=1 00:35:50.524 00:35:50.524 ' 00:35:50.524 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:50.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.524 --rc genhtml_branch_coverage=1 00:35:50.524 --rc genhtml_function_coverage=1 00:35:50.524 --rc genhtml_legend=1 00:35:50.524 --rc geninfo_all_blocks=1 00:35:50.524 --rc geninfo_unexecuted_blocks=1 00:35:50.524 00:35:50.524 ' 00:35:50.524 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:50.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:50.524 --rc genhtml_branch_coverage=1 00:35:50.524 --rc genhtml_function_coverage=1 00:35:50.524 --rc genhtml_legend=1 00:35:50.524 --rc geninfo_all_blocks=1 00:35:50.524 --rc geninfo_unexecuted_blocks=1 00:35:50.524 00:35:50.524 ' 00:35:50.524 08:33:04 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@13 -- # 
NVMF_TRANSPORT_OPTS= 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:50.524 08:33:04 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.524 08:33:04 nvmf_identify_passthru -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.524 08:33:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.524 08:33:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:50.524 08:33:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@50 -- # : 0 00:35:50.524 08:33:04 
nvmf_identify_passthru -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:50.524 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:50.524 08:33:04 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:50.524 08:33:04 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:50.524 08:33:04 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.524 08:33:04 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.524 08:33:04 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.524 08:33:04 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:35:50.524 08:33:04 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:50.524 08:33:04 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@296 -- # prepare_net_devs 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@258 -- # local -g is_hw=no 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@260 -- # remove_target_ns 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:50.524 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:35:50.524 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:35:50.524 08:33:04 nvmf_identify_passthru -- nvmf/common.sh@125 -- # xtrace_disable 00:35:50.525 08:33:04 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@131 -- # pci_devs=() 00:35:57.097 08:33:10 nvmf_identify_passthru -- 
nvmf/common.sh@131 -- # local -a pci_devs 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@135 -- # net_devs=() 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@136 -- # e810=() 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@136 -- # local -ga e810 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@137 -- # x722=() 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@137 -- # local -ga x722 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@138 -- # mlx=() 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@138 -- # local -ga mlx 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 
00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:57.097 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:57.097 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:57.098 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:57.098 Found net devices under 0000:86:00.0: cvl_0_0 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:57.098 
08:33:10 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:57.098 Found net devices under 0000:86:00.1: cvl_0_1 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@262 -- # is_hw=yes 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@247 -- # create_target_ns 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@139 -- # set_up lo 
NVMF_TARGET_NS_CMD 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@28 -- # local -g _dev 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # ips=() 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:35:57.098 08:33:10 
nvmf_identify_passthru -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772161 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 
00:35:57.098 10.0.0.1 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772162 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:57.098 10.0.0.2 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:35:57.098 08:33:10 
nvmf_identify_passthru -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:57.098 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs )) 
00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:35:57.099 08:33:10 nvmf_identify_passthru -- 
nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:57.099 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:57.099 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.476 ms 00:35:57.099 00:35:57.099 --- 10.0.0.1 ping statistics --- 00:35:57.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.099 rtt min/avg/max/mdev = 0.476/0.476/0.476/0.000 ms 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.2 
00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:35:57.099 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:57.099 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:35:57.099 00:35:57.099 --- 10.0.0.2 ping statistics --- 00:35:57.099 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:57.099 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair++ )) 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@270 -- # return 0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev 
initiator0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=initiator1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # return 1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev= 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@160 
-- # return 0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target0 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@333 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:57.099 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # get_net_dev target1 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # local dev=target1 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # return 1 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@159 -- # dev= 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@160 -- # return 0 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:35:57.100 ' 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:57.100 08:33:10 nvmf_identify_passthru 
-- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:57.100 08:33:10 nvmf_identify_passthru -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:57.100 08:33:10 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:57.100 08:33:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:35:57.100 08:33:10 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:35:57.100 08:33:10 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:35:57.100 08:33:10 
nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:35:57.100 08:33:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:35:57.100 08:33:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:35:57.100 08:33:10 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:36:02.374 08:33:15 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:36:02.374 08:33:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:36:02.374 08:33:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:36:02.374 08:33:15 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:36:06.567 08:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:36:06.567 08:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:36:06.567 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:06.567 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:06.567 08:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:36:06.567 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:06.567 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:06.567 08:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1950961 00:36:06.567 08:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:36:06.567 08:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:36:06.567 08:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1950961 00:36:06.567 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 1950961 ']' 00:36:06.567 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:06.567 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:06.567 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:06.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:06.567 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:06.567 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:06.567 [2024-11-20 08:33:20.101072] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:36:06.567 [2024-11-20 08:33:20.101120] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:06.567 [2024-11-20 08:33:20.180887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:06.567 [2024-11-20 08:33:20.224449] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:06.567 [2024-11-20 08:33:20.224487] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:36:06.567 [2024-11-20 08:33:20.224494] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:06.567 [2024-11-20 08:33:20.224500] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:06.567 [2024-11-20 08:33:20.224505] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:06.567 [2024-11-20 08:33:20.226020] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.567 [2024-11-20 08:33:20.226131] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:36:06.567 [2024-11-20 08:33:20.226259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:06.567 [2024-11-20 08:33:20.226259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:36:07.136 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:07.136 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:36:07.136 08:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:36:07.136 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.136 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.136 INFO: Log level set to 20 00:36:07.136 INFO: Requests: 00:36:07.136 { 00:36:07.136 "jsonrpc": "2.0", 00:36:07.136 "method": "nvmf_set_config", 00:36:07.136 "id": 1, 00:36:07.136 "params": { 00:36:07.136 "admin_cmd_passthru": { 00:36:07.136 "identify_ctrlr": true 00:36:07.136 } 00:36:07.136 } 00:36:07.136 } 00:36:07.136 00:36:07.136 INFO: response: 00:36:07.136 { 00:36:07.136 "jsonrpc": "2.0", 00:36:07.136 "id": 1, 00:36:07.136 "result": true 00:36:07.136 } 00:36:07.136 00:36:07.136 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.136 08:33:20 nvmf_identify_passthru -- target/identify_passthru.sh@37 
-- # rpc_cmd -v framework_start_init 00:36:07.136 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.136 08:33:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.136 INFO: Setting log level to 20 00:36:07.136 INFO: Setting log level to 20 00:36:07.136 INFO: Log level set to 20 00:36:07.136 INFO: Log level set to 20 00:36:07.136 INFO: Requests: 00:36:07.136 { 00:36:07.136 "jsonrpc": "2.0", 00:36:07.136 "method": "framework_start_init", 00:36:07.136 "id": 1 00:36:07.136 } 00:36:07.136 00:36:07.136 INFO: Requests: 00:36:07.136 { 00:36:07.136 "jsonrpc": "2.0", 00:36:07.136 "method": "framework_start_init", 00:36:07.136 "id": 1 00:36:07.136 } 00:36:07.136 00:36:07.136 [2024-11-20 08:33:21.030512] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:36:07.136 INFO: response: 00:36:07.136 { 00:36:07.136 "jsonrpc": "2.0", 00:36:07.136 "id": 1, 00:36:07.136 "result": true 00:36:07.136 } 00:36:07.136 00:36:07.136 INFO: response: 00:36:07.136 { 00:36:07.136 "jsonrpc": "2.0", 00:36:07.136 "id": 1, 00:36:07.136 "result": true 00:36:07.136 } 00:36:07.136 00:36:07.136 08:33:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.136 08:33:21 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:07.136 08:33:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.136 08:33:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.136 INFO: Setting log level to 40 00:36:07.136 INFO: Setting log level to 40 00:36:07.136 INFO: Setting log level to 40 00:36:07.136 [2024-11-20 08:33:21.043821] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:07.136 08:33:21 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:07.136 08:33:21 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # 
timing_exit start_nvmf_tgt 00:36:07.136 08:33:21 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:07.136 08:33:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:07.136 08:33:21 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:36:07.136 08:33:21 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:07.136 08:33:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:10.427 Nvme0n1 00:36:10.427 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.427 08:33:23 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:36:10.427 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.427 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:10.427 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.428 08:33:23 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:36:10.428 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.428 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:10.428 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.428 08:33:23 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:10.428 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.428 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:10.428 [2024-11-20 08:33:23.963995] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: 
*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:10.428 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.428 08:33:23 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:36:10.428 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.428 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:10.428 [ 00:36:10.428 { 00:36:10.428 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:36:10.428 "subtype": "Discovery", 00:36:10.428 "listen_addresses": [], 00:36:10.428 "allow_any_host": true, 00:36:10.428 "hosts": [] 00:36:10.428 }, 00:36:10.428 { 00:36:10.428 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:36:10.428 "subtype": "NVMe", 00:36:10.428 "listen_addresses": [ 00:36:10.428 { 00:36:10.428 "trtype": "TCP", 00:36:10.428 "adrfam": "IPv4", 00:36:10.428 "traddr": "10.0.0.2", 00:36:10.428 "trsvcid": "4420" 00:36:10.428 } 00:36:10.428 ], 00:36:10.428 "allow_any_host": true, 00:36:10.428 "hosts": [], 00:36:10.428 "serial_number": "SPDK00000000000001", 00:36:10.428 "model_number": "SPDK bdev Controller", 00:36:10.428 "max_namespaces": 1, 00:36:10.428 "min_cntlid": 1, 00:36:10.428 "max_cntlid": 65519, 00:36:10.428 "namespaces": [ 00:36:10.428 { 00:36:10.428 "nsid": 1, 00:36:10.428 "bdev_name": "Nvme0n1", 00:36:10.428 "name": "Nvme0n1", 00:36:10.428 "nguid": "B135379175D043A38D6FBDB5064E62EA", 00:36:10.428 "uuid": "b1353791-75d0-43a3-8d6f-bdb5064e62ea" 00:36:10.428 } 00:36:10.428 ] 00:36:10.428 } 00:36:10.428 ] 00:36:10.428 08:33:23 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.428 08:33:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:10.428 08:33:23 nvmf_identify_passthru -- 
target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:36:10.428 08:33:23 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:36:10.428 08:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:36:10.428 08:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:36:10.428 08:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:36:10.428 08:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:36:10.428 08:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:36:10.428 08:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:36:10.428 08:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:36:10.428 08:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:10.428 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.428 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:10.428 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.428 08:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:36:10.428 08:33:24 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:36:10.428 08:33:24 nvmf_identify_passthru -- nvmf/common.sh@335 -- # nvmfcleanup 00:36:10.428 08:33:24 nvmf_identify_passthru -- nvmf/common.sh@99 -- # sync 00:36:10.428 08:33:24 nvmf_identify_passthru -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:36:10.428 08:33:24 
nvmf_identify_passthru -- nvmf/common.sh@102 -- # set +e 00:36:10.428 08:33:24 nvmf_identify_passthru -- nvmf/common.sh@103 -- # for i in {1..20} 00:36:10.428 08:33:24 nvmf_identify_passthru -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:36:10.428 rmmod nvme_tcp 00:36:10.428 rmmod nvme_fabrics 00:36:10.428 rmmod nvme_keyring 00:36:10.687 08:33:24 nvmf_identify_passthru -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:36:10.687 08:33:24 nvmf_identify_passthru -- nvmf/common.sh@106 -- # set -e 00:36:10.687 08:33:24 nvmf_identify_passthru -- nvmf/common.sh@107 -- # return 0 00:36:10.687 08:33:24 nvmf_identify_passthru -- nvmf/common.sh@336 -- # '[' -n 1950961 ']' 00:36:10.687 08:33:24 nvmf_identify_passthru -- nvmf/common.sh@337 -- # killprocess 1950961 00:36:10.687 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 1950961 ']' 00:36:10.687 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 1950961 00:36:10.687 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:36:10.687 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:10.687 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1950961 00:36:10.687 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:10.687 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:10.687 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1950961' 00:36:10.687 killing process with pid 1950961 00:36:10.687 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 1950961 00:36:10.687 08:33:24 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 1950961 00:36:12.592 08:33:26 nvmf_identify_passthru -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:36:12.592 08:33:26 nvmf_identify_passthru 
-- nvmf/common.sh@342 -- # nvmf_fini 00:36:12.592 08:33:26 nvmf_identify_passthru -- nvmf/setup.sh@254 -- # local dev 00:36:12.592 08:33:26 nvmf_identify_passthru -- nvmf/setup.sh@257 -- # remove_target_ns 00:36:12.592 08:33:26 nvmf_identify_passthru -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:12.592 08:33:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:36:12.592 08:33:26 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@258 -- # delete_main_bridge 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@121 -- # return 0 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:36:15.130 08:33:28 
nvmf_identify_passthru -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # _dev=0 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # dev_map=() 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/setup.sh@274 -- # iptr 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/common.sh@548 -- # iptables-save 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:36:15.130 08:33:28 nvmf_identify_passthru -- nvmf/common.sh@548 -- # iptables-restore 00:36:15.130 00:36:15.130 real 0m24.390s 00:36:15.130 user 0m32.977s 00:36:15.130 sys 0m6.471s 00:36:15.130 08:33:28 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:15.130 08:33:28 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:36:15.130 ************************************ 00:36:15.130 END TEST nvmf_identify_passthru 00:36:15.130 ************************************ 00:36:15.130 08:33:28 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:15.130 08:33:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:15.130 08:33:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.130 08:33:28 -- common/autotest_common.sh@10 -- # set +x 00:36:15.130 ************************************ 00:36:15.130 START TEST nvmf_dif 00:36:15.130 ************************************ 00:36:15.130 08:33:28 nvmf_dif -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:36:15.130 * Looking for test storage... 00:36:15.130 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:15.130 08:33:28 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:36:15.130 08:33:28 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:36:15.130 08:33:28 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:36:15.130 08:33:28 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.130 08:33:28 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:36:15.130 08:33:28 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.130 08:33:28 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:36:15.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.130 --rc genhtml_branch_coverage=1 00:36:15.130 --rc genhtml_function_coverage=1 00:36:15.130 --rc genhtml_legend=1 00:36:15.130 --rc geninfo_all_blocks=1 00:36:15.130 --rc geninfo_unexecuted_blocks=1 00:36:15.130 00:36:15.130 ' 00:36:15.130 08:33:28 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:36:15.130 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.130 --rc genhtml_branch_coverage=1 00:36:15.130 --rc genhtml_function_coverage=1 00:36:15.130 --rc genhtml_legend=1 00:36:15.130 --rc geninfo_all_blocks=1 00:36:15.130 --rc geninfo_unexecuted_blocks=1 00:36:15.130 00:36:15.130 ' 00:36:15.131 08:33:28 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:36:15.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.131 --rc genhtml_branch_coverage=1 00:36:15.131 --rc genhtml_function_coverage=1 00:36:15.131 --rc genhtml_legend=1 00:36:15.131 --rc geninfo_all_blocks=1 00:36:15.131 --rc geninfo_unexecuted_blocks=1 00:36:15.131 00:36:15.131 ' 00:36:15.131 08:33:28 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:36:15.131 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.131 --rc genhtml_branch_coverage=1 00:36:15.131 --rc genhtml_function_coverage=1 00:36:15.131 --rc genhtml_legend=1 00:36:15.131 --rc geninfo_all_blocks=1 00:36:15.131 --rc geninfo_unexecuted_blocks=1 00:36:15.131 00:36:15.131 ' 00:36:15.131 08:33:28 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.131 08:33:28 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.131 08:33:28 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.131 08:33:28 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.131 08:33:28 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.131 08:33:28 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.131 08:33:28 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.131 08:33:28 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.131 08:33:28 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:36:15.131 08:33:28 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:36:15.131 08:33:28 nvmf_dif -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:36:15.131 08:33:28 nvmf_dif -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:15.131 08:33:28 nvmf_dif -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@50 -- # : 0 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:36:15.131 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@54 -- # have_pci_nics=0 00:36:15.131 08:33:28 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:36:15.131 08:33:28 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:36:15.131 08:33:28 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:36:15.131 08:33:28 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:36:15.131 08:33:28 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@296 -- # prepare_net_devs 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@258 -- # local -g is_hw=no 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@260 -- # remove_target_ns 00:36:15.131 08:33:28 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:36:15.131 08:33:28 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:36:15.131 08:33:28 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:36:15.131 08:33:28 nvmf_dif -- nvmf/common.sh@125 -- # xtrace_disable 00:36:15.131 08:33:28 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@131 -- # pci_devs=() 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@131 -- # local -a pci_devs 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@132 -- # pci_net_devs=() 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:36:21.704 
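The trace above records a real script error: nvmf/common.sh line 31 evaluates `'[' '' -eq 1 ']'`, and `[` rejects the empty string as an integer ("integer expression expected"). A minimal sketch of that failure mode and one defensive fix — the `check_flag`/`flag` names are illustrative, not from the script:

```shell
# With an empty value, a plain [ "$flag" -eq 1 ] prints
# "integer expression expected" to stderr and the test is treated as false.
# Defaulting with ${1:-0} (":-" also covers set-but-empty) avoids the error.
flag=""
check_flag() {
    if [ "${1:-0}" -eq 1 ]; then
        echo "enabled"
    else
        echo "disabled"
    fi
}
check_flag "$flag"    # prints "disabled", with no [ error on stderr
```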
08:33:34 nvmf_dif -- nvmf/common.sh@133 -- # pci_drivers=() 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@133 -- # local -A pci_drivers 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@135 -- # net_devs=() 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@135 -- # local -ga net_devs 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@136 -- # e810=() 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@136 -- # local -ga e810 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@137 -- # x722=() 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@137 -- # local -ga x722 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@138 -- # mlx=() 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@138 -- # local -ga mlx 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@162 -- # 
pci_devs+=("${e810[@]}") 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:36:21.704 Found 0000:86:00.0 (0x8086 - 0x159b) 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:21.704 08:33:34 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:36:21.705 Found 0000:86:00.1 (0x8086 - 0x159b) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@226 -- # for pci in 
"${pci_devs[@]}" 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:36:21.705 Found net devices under 0000:86:00.0: cvl_0_0 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:36:21.705 Found net devices under 0000:86:00.1: cvl_0_1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@262 -- # is_hw=yes 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:36:21.705 08:33:34 nvmf_dif -- 
nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@247 -- # create_target_ns 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@27 -- # local -gA dev_map 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@28 -- # local -g _dev 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:36:21.705 
08:33:34 nvmf_dif -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772161 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 
00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:36:21.705 10.0.0.1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772162 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:36:21.705 10.0.0.2 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:36:21.705 08:33:34 nvmf_dif -- 
nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:36:21.705 08:33:34 nvmf_dif -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@38 -- # ping_ips 1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:21.705 08:33:34 nvmf_dif -- 
nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:21.705 08:33:34 nvmf_dif -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:36:21.706 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:21.706 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.449 ms 00:36:21.706 00:36:21.706 --- 10.0.0.1 ping statistics --- 00:36:21.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.706 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= count=1 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:36:21.706 08:33:34 nvmf_dif -- 
nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:36:21.706 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:21.706 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:36:21.706 00:36:21.706 --- 10.0.0.2 ping statistics --- 00:36:21.706 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.706 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair++ )) 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:36:21.706 08:33:34 nvmf_dif -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:21.706 08:33:34 nvmf_dif -- nvmf/common.sh@270 -- # return 0 00:36:21.706 08:33:34 nvmf_dif -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:36:21.706 08:33:34 nvmf_dif -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:23.611 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:36:23.611 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:36:23.611 
0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:36:23.611 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:36:23.871 08:33:37 nvmf_dif -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator0 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@325 -- # get_initiator_ip_address initiator1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:36:23.871 08:33:37 
nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=initiator1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@100 -- # return 1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@159 -- # dev= 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@160 -- # return 0 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target0 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target0 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:36:23.871 08:33:37 nvmf_dif -- 
nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@159 -- # get_net_dev target1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@98 -- # local dev=target1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@100 -- # return 1 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@159 -- # dev= 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@160 -- # return 0 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:36:23.871 ' 00:36:23.871 08:33:37 nvmf_dif -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:23.871 08:33:37 nvmf_dif -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:36:23.871 08:33:37 nvmf_dif -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:23.871 08:33:37 nvmf_dif -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:36:23.871 08:33:37 nvmf_dif -- nvmf/common.sh@321 -- # modprobe nvme-tcp 
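A condensed, hedged recap of what nvmftestinit just did: the 10.0.0.x addresses are derived from the integer pool 0x0a000001 (167772161) the way setup.sh's val_to_ip does, and one initiator/target pair is plumbed through a network namespace. The ip/iptables commands need root and the real cvl_0_* NICs, so they run through a dry-run wrapper here (swap the `echo` out to execute):

```shell
# Convert a 32-bit integer to dotted-quad form, as setup.sh's val_to_ip does
# (167772161 -> 10.0.0.1, 167772162 -> 10.0.0.2).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >> 8) & 0xff ))  $((  val        & 0xff ))
}
run() { echo "+ $*"; }   # dry-run wrapper; replace body to run for real

initiator_ip=$(val_to_ip 167772161)
target_ip=$(val_to_ip 167772162)
run ip netns add nvmf_ns_spdk
run ip netns exec nvmf_ns_spdk ip link set lo up
run ip link set cvl_0_1 netns nvmf_ns_spdk
run ip addr add "$initiator_ip/24" dev cvl_0_0
run ip netns exec nvmf_ns_spdk ip addr add "$target_ip/24" dev cvl_0_1
run ip link set cvl_0_0 up
run ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
run iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
run ip netns exec nvmf_ns_spdk ping -c 1 "$initiator_ip"
```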
00:36:23.871 08:33:37 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:36:23.871 08:33:37 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:36:23.871 08:33:37 nvmf_dif -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:36:23.871 08:33:37 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:23.871 08:33:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:23.871 08:33:37 nvmf_dif -- nvmf/common.sh@328 -- # nvmfpid=1956692 00:36:23.871 08:33:37 nvmf_dif -- nvmf/common.sh@329 -- # waitforlisten 1956692 00:36:23.871 08:33:37 nvmf_dif -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:36:23.871 08:33:37 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 1956692 ']' 00:36:23.871 08:33:37 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.871 08:33:37 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:23.871 08:33:37 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:23.871 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.871 08:33:37 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:23.871 08:33:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:23.871 [2024-11-20 08:33:37.850415] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:36:23.871 [2024-11-20 08:33:37.850464] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:24.131 [2024-11-20 08:33:37.927511] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:24.131 [2024-11-20 08:33:37.968185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:24.131 [2024-11-20 08:33:37.968224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:24.131 [2024-11-20 08:33:37.968232] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:24.131 [2024-11-20 08:33:37.968238] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:24.131 [2024-11-20 08:33:37.968244] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
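The startup above pairs `ip netns exec nvmf_ns_spdk .../nvmf_tgt -i 0 -e 0xFFFF` with `waitforlisten`, which blocks until the target's RPC socket `/var/tmp/spdk.sock` appears. A hedged sketch of that polling step (retry count and sleep interval are illustrative, not the harness's actual values):

```shell
# Poll for a UNIX-domain socket, the way waitforlisten gates RPC use on a
# freshly started nvmf_tgt. Returns 0 once the socket exists, 1 on timeout.
wait_for_rpc_sock() {
    local sock=${1:-/var/tmp/spdk.sock} retries=${2:-100}
    while (( retries-- > 0 )); do
        [ -S "$sock" ] && return 0   # -S: exists and is a socket
        sleep 0.1
    done
    return 1
}
```

Usage: `wait_for_rpc_sock /var/tmp/spdk.sock 50 || echo "target never came up"`.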
00:36:24.131 [2024-11-20 08:33:37.968841] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.131 08:33:38 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:24.131 08:33:38 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:36:24.131 08:33:38 nvmf_dif -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:36:24.131 08:33:38 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:24.131 08:33:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:24.131 08:33:38 nvmf_dif -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:24.131 08:33:38 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:36:24.131 08:33:38 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:36:24.131 08:33:38 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.131 08:33:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:24.131 [2024-11-20 08:33:38.104046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:24.131 08:33:38 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.131 08:33:38 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:36:24.131 08:33:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:24.131 08:33:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:24.131 08:33:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:24.131 ************************************ 00:36:24.131 START TEST fio_dif_1_default 00:36:24.131 ************************************ 00:36:24.131 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:36:24.131 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:36:24.131 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:36:24.131 08:33:38 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:36:24.131 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:36:24.131 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:36:24.131 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:24.131 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.131 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:24.391 bdev_null0 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:24.391 [2024-11-20 08:33:38.180387] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # config=() 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # local subsystem config 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:24.391 { 00:36:24.391 "params": { 00:36:24.391 "name": "Nvme$subsystem", 00:36:24.391 "trtype": "$TEST_TRANSPORT", 00:36:24.391 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:24.391 "adrfam": "ipv4", 00:36:24.391 "trsvcid": "$NVMF_PORT", 00:36:24.391 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:24.391 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:24.391 "hdgst": ${hdgst:-false}, 00:36:24.391 "ddgst": ${ddgst:-false} 00:36:24.391 }, 00:36:24.391 "method": "bdev_nvme_attach_controller" 00:36:24.391 } 00:36:24.391 EOF 00:36:24.391 )") 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # cat 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@396 -- # jq . 
00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@397 -- # IFS=, 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:36:24.391 "params": { 00:36:24.391 "name": "Nvme0", 00:36:24.391 "trtype": "tcp", 00:36:24.391 "traddr": "10.0.0.2", 00:36:24.391 "adrfam": "ipv4", 00:36:24.391 "trsvcid": "4420", 00:36:24.391 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:24.391 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:24.391 "hdgst": false, 00:36:24.391 "ddgst": false 00:36:24.391 }, 00:36:24.391 "method": "bdev_nvme_attach_controller" 00:36:24.391 }' 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:24.391 08:33:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:24.651 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:24.651 fio-3.35 
00:36:24.651 Starting 1 thread 00:36:36.862 00:36:36.862 filename0: (groupid=0, jobs=1): err= 0: pid=1957062: Wed Nov 20 08:33:49 2024 00:36:36.862 read: IOPS=192, BW=769KiB/s (788kB/s)(7712KiB/10024msec) 00:36:36.862 slat (nsec): min=5848, max=33987, avg=6215.94, stdev=954.32 00:36:36.862 clat (usec): min=354, max=46000, avg=20777.42, stdev=20390.42 00:36:36.862 lat (usec): min=360, max=46034, avg=20783.63, stdev=20390.38 00:36:36.862 clat percentiles (usec): 00:36:36.862 | 1.00th=[ 371], 5.00th=[ 375], 10.00th=[ 383], 20.00th=[ 392], 00:36:36.862 | 30.00th=[ 400], 40.00th=[ 408], 50.00th=[ 619], 60.00th=[40633], 00:36:36.862 | 70.00th=[40633], 80.00th=[41681], 90.00th=[41681], 95.00th=[41681], 00:36:36.862 | 99.00th=[42730], 99.50th=[42730], 99.90th=[45876], 99.95th=[45876], 00:36:36.862 | 99.99th=[45876] 00:36:36.862 bw ( KiB/s): min= 704, max= 832, per=99.95%, avg=769.60, stdev=26.42, samples=20 00:36:36.862 iops : min= 176, max= 208, avg=192.40, stdev= 6.60, samples=20 00:36:36.862 lat (usec) : 500=49.79%, 750=0.21% 00:36:36.862 lat (msec) : 50=50.00% 00:36:36.862 cpu : usr=92.45%, sys=7.29%, ctx=11, majf=0, minf=0 00:36:36.862 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:36.862 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.862 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:36.862 issued rwts: total=1928,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:36.863 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:36.863 00:36:36.863 Run status group 0 (all jobs): 00:36:36.863 READ: bw=769KiB/s (788kB/s), 769KiB/s-769KiB/s (788kB/s-788kB/s), io=7712KiB (7897kB), run=10024-10024msec 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.863 00:36:36.863 real 0m11.138s 00:36:36.863 user 0m15.686s 00:36:36.863 sys 0m1.051s 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:36:36.863 ************************************ 00:36:36.863 END TEST fio_dif_1_default 00:36:36.863 ************************************ 00:36:36.863 08:33:49 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:36:36.863 08:33:49 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:36.863 08:33:49 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:36.863 08:33:49 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:36.863 ************************************ 00:36:36.863 START TEST fio_dif_1_multi_subsystems 00:36:36.863 ************************************ 00:36:36.863 08:33:49 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:36.863 bdev_null0 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.863 08:33:49 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:36.863 [2024-11-20 08:33:49.390508] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:36.863 bdev_null1 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # config=() 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # local subsystem config 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:36.863 { 00:36:36.863 "params": { 00:36:36.863 "name": "Nvme$subsystem", 00:36:36.863 "trtype": "$TEST_TRANSPORT", 00:36:36.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:36.863 "adrfam": "ipv4", 00:36:36.863 "trsvcid": "$NVMF_PORT", 00:36:36.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:36.863 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:36.863 "hdgst": ${hdgst:-false}, 00:36:36.863 "ddgst": ${ddgst:-false} 00:36:36.863 }, 00:36:36.863 "method": "bdev_nvme_attach_controller" 00:36:36.863 } 00:36:36.863 EOF 00:36:36.863 )") 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:36.863 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:36.863 { 00:36:36.863 "params": { 00:36:36.863 "name": "Nvme$subsystem", 00:36:36.863 "trtype": "$TEST_TRANSPORT", 00:36:36.863 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:36.863 "adrfam": "ipv4", 00:36:36.863 "trsvcid": "$NVMF_PORT", 00:36:36.863 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:36.864 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:36.864 "hdgst": ${hdgst:-false}, 00:36:36.864 "ddgst": ${ddgst:-false} 00:36:36.864 }, 00:36:36.864 "method": "bdev_nvme_attach_controller" 00:36:36.864 } 00:36:36.864 EOF 00:36:36.864 )") 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@396 -- # jq . 
00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@397 -- # IFS=, 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:36:36.864 "params": { 00:36:36.864 "name": "Nvme0", 00:36:36.864 "trtype": "tcp", 00:36:36.864 "traddr": "10.0.0.2", 00:36:36.864 "adrfam": "ipv4", 00:36:36.864 "trsvcid": "4420", 00:36:36.864 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:36.864 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:36.864 "hdgst": false, 00:36:36.864 "ddgst": false 00:36:36.864 }, 00:36:36.864 "method": "bdev_nvme_attach_controller" 00:36:36.864 },{ 00:36:36.864 "params": { 00:36:36.864 "name": "Nvme1", 00:36:36.864 "trtype": "tcp", 00:36:36.864 "traddr": "10.0.0.2", 00:36:36.864 "adrfam": "ipv4", 00:36:36.864 "trsvcid": "4420", 00:36:36.864 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:36.864 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:36.864 "hdgst": false, 00:36:36.864 "ddgst": false 00:36:36.864 }, 00:36:36.864 "method": "bdev_nvme_attach_controller" 00:36:36.864 }' 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:36.864 08:33:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:36.864 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:36.864 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:36:36.864 fio-3.35 00:36:36.864 Starting 2 threads 00:36:46.844 00:36:46.844 filename0: (groupid=0, jobs=1): err= 0: pid=1959037: Wed Nov 20 08:34:00 2024 00:36:46.844 read: IOPS=194, BW=780KiB/s (799kB/s)(7808KiB/10011msec) 00:36:46.844 slat (nsec): min=5807, max=30851, avg=6977.85, stdev=2098.89 00:36:46.844 clat (usec): min=377, max=42578, avg=20492.62, stdev=20503.00 00:36:46.844 lat (usec): min=383, max=42585, avg=20499.60, stdev=20502.38 00:36:46.844 clat percentiles (usec): 00:36:46.844 | 1.00th=[ 400], 5.00th=[ 412], 10.00th=[ 429], 20.00th=[ 478], 00:36:46.844 | 30.00th=[ 486], 40.00th=[ 553], 50.00th=[ 627], 60.00th=[41157], 00:36:46.844 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:36:46.844 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:36:46.844 | 99.99th=[42730] 00:36:46.844 bw ( KiB/s): min= 672, max= 896, per=49.38%, avg=779.20, stdev=52.20, samples=20 00:36:46.844 iops : min= 168, max= 224, avg=194.80, stdev=13.05, samples=20 00:36:46.844 lat (usec) : 500=38.63%, 750=12.19%, 1000=0.20% 00:36:46.844 lat (msec) : 2=0.20%, 50=48.77% 00:36:46.844 cpu : usr=96.79%, sys=2.95%, ctx=9, majf=0, minf=98 00:36:46.844 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:46.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:36:46.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.844 issued rwts: total=1952,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.844 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:46.844 filename1: (groupid=0, jobs=1): err= 0: pid=1959038: Wed Nov 20 08:34:00 2024 00:36:46.844 read: IOPS=199, BW=800KiB/s (819kB/s)(8032KiB/10041msec) 00:36:46.844 slat (nsec): min=5803, max=30165, avg=6987.83, stdev=2091.28 00:36:46.844 clat (usec): min=442, max=42423, avg=19980.26, stdev=20385.02 00:36:46.844 lat (usec): min=448, max=42430, avg=19987.25, stdev=20384.49 00:36:46.844 clat percentiles (usec): 00:36:46.844 | 1.00th=[ 465], 5.00th=[ 494], 10.00th=[ 553], 20.00th=[ 611], 00:36:46.844 | 30.00th=[ 619], 40.00th=[ 668], 50.00th=[ 938], 60.00th=[41157], 00:36:46.844 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41681], 95.00th=[42206], 00:36:46.844 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:36:46.844 | 99.99th=[42206] 00:36:46.844 bw ( KiB/s): min= 672, max= 1152, per=50.78%, avg=801.60, stdev=102.50, samples=20 00:36:46.844 iops : min= 168, max= 288, avg=200.40, stdev=25.63, samples=20 00:36:46.844 lat (usec) : 500=6.32%, 750=41.68%, 1000=3.19% 00:36:46.844 lat (msec) : 2=1.39%, 50=47.41% 00:36:46.844 cpu : usr=97.06%, sys=2.68%, ctx=8, majf=0, minf=184 00:36:46.844 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:46.844 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.844 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:46.844 issued rwts: total=2008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:46.844 latency : target=0, window=0, percentile=100.00%, depth=4 00:36:46.844 00:36:46.844 Run status group 0 (all jobs): 00:36:46.844 READ: bw=1578KiB/s (1615kB/s), 780KiB/s-800KiB/s (799kB/s-819kB/s), io=15.5MiB (16.2MB), run=10011-10041msec 00:36:46.844 08:34:00 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:46.844 00:36:46.844 real 0m11.456s 00:36:46.844 user 0m26.677s 00:36:46.844 sys 0m0.880s 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:46.844 08:34:00 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:36:46.844 ************************************ 00:36:46.844 END TEST fio_dif_1_multi_subsystems 00:36:46.844 ************************************ 00:36:46.844 08:34:00 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:36:46.844 08:34:00 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:46.844 08:34:00 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:46.844 08:34:00 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:47.104 ************************************ 00:36:47.104 START TEST fio_dif_rand_params 00:36:47.104 ************************************ 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:36:47.104 08:34:00 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.104 bdev_null0 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.104 
08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:47.104 [2024-11-20 08:34:00.917413] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:47.104 { 00:36:47.104 "params": { 00:36:47.104 "name": "Nvme$subsystem", 00:36:47.104 "trtype": "$TEST_TRANSPORT", 00:36:47.104 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:47.104 "adrfam": "ipv4", 00:36:47.104 "trsvcid": "$NVMF_PORT", 00:36:47.104 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:47.104 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:47.104 "hdgst": ${hdgst:-false}, 00:36:47.104 "ddgst": ${ddgst:-false} 00:36:47.104 }, 00:36:47.104 "method": "bdev_nvme_attach_controller" 00:36:47.104 } 
00:36:47.104 EOF 00:36:47.104 )") 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:47.104 08:34:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:36:47.104 "params": { 00:36:47.104 "name": "Nvme0", 00:36:47.104 "trtype": "tcp", 00:36:47.104 "traddr": "10.0.0.2", 00:36:47.104 "adrfam": "ipv4", 00:36:47.104 "trsvcid": "4420", 00:36:47.104 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:47.104 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:47.104 "hdgst": false, 00:36:47.104 "ddgst": false 00:36:47.104 }, 00:36:47.104 "method": "bdev_nvme_attach_controller" 00:36:47.104 }' 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:47.104 08:34:00 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:47.104 08:34:00 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:47.363 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:47.363 ... 00:36:47.363 fio-3.35 00:36:47.363 Starting 3 threads 00:36:53.930 00:36:53.930 filename0: (groupid=0, jobs=1): err= 0: pid=1960914: Wed Nov 20 08:34:06 2024 00:36:53.930 read: IOPS=312, BW=39.1MiB/s (41.0MB/s)(197MiB/5046msec) 00:36:53.930 slat (nsec): min=6022, max=26485, avg=10274.41, stdev=1940.56 00:36:53.930 clat (usec): min=3347, max=50985, avg=9552.01, stdev=7188.83 00:36:53.930 lat (usec): min=3353, max=51003, avg=9562.28, stdev=7188.69 00:36:53.930 clat percentiles (usec): 00:36:53.930 | 1.00th=[ 4948], 5.00th=[ 6063], 10.00th=[ 6849], 20.00th=[ 7439], 00:36:53.930 | 30.00th=[ 7767], 40.00th=[ 8094], 50.00th=[ 8291], 60.00th=[ 8586], 00:36:53.930 | 70.00th=[ 8848], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[10552], 00:36:53.930 | 99.00th=[48497], 99.50th=[49546], 99.90th=[51119], 99.95th=[51119], 00:36:53.930 | 99.99th=[51119] 00:36:53.930 bw ( KiB/s): min=16128, max=47104, per=33.30%, avg=40320.00, stdev=9175.81, samples=10 00:36:53.930 iops : min= 126, max= 368, avg=315.00, stdev=71.69, samples=10 00:36:53.930 lat (msec) : 4=0.57%, 10=91.13%, 20=4.94%, 50=3.11%, 100=0.25% 00:36:53.930 cpu : usr=94.31%, sys=5.39%, ctx=9, majf=0, minf=2 00:36:53.930 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:53.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.930 issued rwts: total=1578,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:53.930 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:53.930 filename0: (groupid=0, jobs=1): err= 0: pid=1960915: Wed Nov 20 08:34:06 2024 00:36:53.930 read: IOPS=318, BW=39.8MiB/s 
(41.7MB/s)(201MiB/5044msec) 00:36:53.930 slat (nsec): min=5924, max=23201, avg=10494.81, stdev=1912.56 00:36:53.930 clat (usec): min=2953, max=90421, avg=9388.86, stdev=5639.36 00:36:53.930 lat (usec): min=2959, max=90433, avg=9399.36, stdev=5639.40 00:36:53.930 clat percentiles (usec): 00:36:53.930 | 1.00th=[ 3720], 5.00th=[ 5538], 10.00th=[ 6128], 20.00th=[ 7570], 00:36:53.930 | 30.00th=[ 8291], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:36:53.930 | 70.00th=[ 9765], 80.00th=[10159], 90.00th=[10683], 95.00th=[11207], 00:36:53.930 | 99.00th=[46924], 99.50th=[49021], 99.90th=[51119], 99.95th=[90702], 00:36:53.930 | 99.99th=[90702] 00:36:53.930 bw ( KiB/s): min=33792, max=48640, per=33.89%, avg=41036.80, stdev=4096.09, samples=10 00:36:53.930 iops : min= 264, max= 380, avg=320.60, stdev=32.00, samples=10 00:36:53.930 lat (msec) : 4=1.87%, 10=75.64%, 20=20.75%, 50=1.56%, 100=0.19% 00:36:53.930 cpu : usr=94.63%, sys=5.10%, ctx=8, majf=0, minf=0 00:36:53.930 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:53.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.930 issued rwts: total=1605,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:53.930 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:53.930 filename0: (groupid=0, jobs=1): err= 0: pid=1960916: Wed Nov 20 08:34:06 2024 00:36:53.930 read: IOPS=317, BW=39.7MiB/s (41.7MB/s)(199MiB/5003msec) 00:36:53.930 slat (nsec): min=6098, max=24905, avg=10506.30, stdev=1903.00 00:36:53.930 clat (usec): min=2768, max=48728, avg=9425.18, stdev=4412.46 00:36:53.930 lat (usec): min=2775, max=48739, avg=9435.68, stdev=4412.70 00:36:53.930 clat percentiles (usec): 00:36:53.930 | 1.00th=[ 3589], 5.00th=[ 5669], 10.00th=[ 6194], 20.00th=[ 7373], 00:36:53.930 | 30.00th=[ 8291], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9896], 00:36:53.930 | 70.00th=[10290], 
80.00th=[10683], 90.00th=[11207], 95.00th=[11731], 00:36:53.930 | 99.00th=[44303], 99.50th=[46924], 99.90th=[48497], 99.95th=[48497], 00:36:53.930 | 99.99th=[48497] 00:36:53.930 bw ( KiB/s): min=37888, max=45312, per=33.58%, avg=40652.80, stdev=2665.35, samples=10 00:36:53.930 iops : min= 296, max= 354, avg=317.60, stdev=20.82, samples=10 00:36:53.930 lat (msec) : 4=2.64%, 10=61.19%, 20=35.03%, 50=1.13% 00:36:53.930 cpu : usr=94.44%, sys=5.26%, ctx=9, majf=0, minf=0 00:36:53.930 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:53.930 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.930 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:53.930 issued rwts: total=1590,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:53.930 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:53.930 00:36:53.930 Run status group 0 (all jobs): 00:36:53.930 READ: bw=118MiB/s (124MB/s), 39.1MiB/s-39.8MiB/s (41.0MB/s-41.7MB/s), io=597MiB (626MB), run=5003-5046msec 00:36:53.930 08:34:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:36:53.930 08:34:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:53.930 08:34:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:53.930 08:34:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:53.930 08:34:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:53.930 08:34:06 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:53.930 08:34:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.930 08:34:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.930 08:34:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.930 08:34:06 nvmf_dif.fio_dif_rand_params 
-- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:53.930 08:34:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.930 08:34:06 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.930 bdev_null0 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 
00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.930 [2024-11-20 08:34:07.034103] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.930 bdev_null1 
00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.930 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 
-- # xtrace_disable 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.931 bdev_null2 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@372 -- # config=() 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:53.931 { 00:36:53.931 "params": { 00:36:53.931 "name": "Nvme$subsystem", 00:36:53.931 "trtype": "$TEST_TRANSPORT", 00:36:53.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.931 "adrfam": "ipv4", 00:36:53.931 "trsvcid": "$NVMF_PORT", 00:36:53.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.931 "hdgst": ${hdgst:-false}, 00:36:53.931 "ddgst": ${ddgst:-false} 00:36:53.931 }, 00:36:53.931 "method": "bdev_nvme_attach_controller" 00:36:53.931 } 00:36:53.931 EOF 00:36:53.931 )") 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:53.931 { 00:36:53.931 "params": { 00:36:53.931 "name": "Nvme$subsystem", 00:36:53.931 "trtype": "$TEST_TRANSPORT", 00:36:53.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.931 "adrfam": "ipv4", 00:36:53.931 "trsvcid": "$NVMF_PORT", 00:36:53.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.931 "hdgst": ${hdgst:-false}, 00:36:53.931 "ddgst": ${ddgst:-false} 00:36:53.931 }, 00:36:53.931 "method": "bdev_nvme_attach_controller" 00:36:53.931 } 00:36:53.931 EOF 00:36:53.931 )") 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:53.931 
08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:36:53.931 { 00:36:53.931 "params": { 00:36:53.931 "name": "Nvme$subsystem", 00:36:53.931 "trtype": "$TEST_TRANSPORT", 00:36:53.931 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:53.931 "adrfam": "ipv4", 00:36:53.931 "trsvcid": "$NVMF_PORT", 00:36:53.931 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:53.931 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:53.931 "hdgst": ${hdgst:-false}, 00:36:53.931 "ddgst": ${ddgst:-false} 00:36:53.931 }, 00:36:53.931 "method": "bdev_nvme_attach_controller" 00:36:53.931 } 00:36:53.931 EOF 00:36:53.931 )") 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:36:53.931 "params": { 00:36:53.931 "name": "Nvme0", 00:36:53.931 "trtype": "tcp", 00:36:53.931 "traddr": "10.0.0.2", 00:36:53.931 "adrfam": "ipv4", 00:36:53.931 "trsvcid": "4420", 00:36:53.931 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:53.931 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:53.931 "hdgst": false, 00:36:53.931 "ddgst": false 00:36:53.931 }, 00:36:53.931 "method": "bdev_nvme_attach_controller" 00:36:53.931 },{ 00:36:53.931 "params": { 00:36:53.931 "name": "Nvme1", 00:36:53.931 "trtype": "tcp", 00:36:53.931 "traddr": "10.0.0.2", 00:36:53.931 "adrfam": "ipv4", 00:36:53.931 "trsvcid": "4420", 00:36:53.931 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:53.931 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:53.931 "hdgst": false, 00:36:53.931 "ddgst": false 00:36:53.931 }, 00:36:53.931 "method": "bdev_nvme_attach_controller" 00:36:53.931 },{ 00:36:53.931 "params": { 00:36:53.931 "name": "Nvme2", 00:36:53.931 "trtype": "tcp", 00:36:53.931 "traddr": "10.0.0.2", 00:36:53.931 "adrfam": "ipv4", 00:36:53.931 "trsvcid": "4420", 00:36:53.931 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:36:53.931 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:36:53.931 "hdgst": false, 00:36:53.931 "ddgst": false 00:36:53.931 }, 00:36:53.931 "method": "bdev_nvme_attach_controller" 00:36:53.931 }' 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:53.931 08:34:07 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:53.931 08:34:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:53.931 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:53.931 ... 00:36:53.931 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:53.931 ... 00:36:53.931 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:36:53.931 ... 
00:36:53.931 fio-3.35
00:36:53.931 Starting 24 threads
00:37:06.234
00:37:06.234 filename0: (groupid=0, jobs=1): err= 0: pid=1962045: Wed Nov 20 08:34:18 2024
00:37:06.234 read: IOPS=608, BW=2433KiB/s (2491kB/s)(23.8MiB/10023msec)
00:37:06.234 slat (usec): min=7, max=258, avg=43.08, stdev=20.63
00:37:06.235 clat (usec): min=8441, max=31631, avg=25880.42, stdev=2053.59
00:37:06.235 lat (usec): min=8639, max=31681, avg=25923.50, stdev=2053.52
00:37:06.235 clat percentiles (usec):
00:37:06.235 | 1.00th=[19530], 5.00th=[24249], 10.00th=[24249], 20.00th=[24511],
00:37:06.235 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084],
00:37:06.235 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28967], 95.00th=[29492],
00:37:06.235 | 99.00th=[30016], 99.50th=[30278], 99.90th=[31327], 99.95th=[31589],
00:37:06.235 | 99.99th=[31589]
00:37:06.235 bw ( KiB/s): min= 2299, max= 2560, per=4.18%, avg=2431.45, stdev=101.67, samples=20
00:37:06.235 iops : min= 574, max= 640, avg=607.80, stdev=25.44, samples=20
00:37:06.235 lat (msec) : 10=0.20%, 20=0.85%, 50=98.95%
00:37:06.235 cpu : usr=97.95%, sys=1.27%, ctx=151, majf=0, minf=9
00:37:06.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.235 filename0: (groupid=0, jobs=1): err= 0: pid=1962046: Wed Nov 20 08:34:18 2024
00:37:06.235 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.7MiB/10008msec)
00:37:06.235 slat (usec): min=8, max=106, avg=52.12, stdev=23.57
00:37:06.235 clat (usec): min=9296, max=50115, avg=25915.55, stdev=2352.51
00:37:06.235 lat (usec): min=9324, max=50154, avg=25967.67, stdev=2354.83
00:37:06.235 clat percentiles (usec):
00:37:06.235 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511],
00:37:06.235 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822],
00:37:06.235 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492],
00:37:06.235 | 99.00th=[30016], 99.50th=[30278], 99.90th=[50070], 99.95th=[50070],
00:37:06.235 | 99.99th=[50070]
00:37:06.235 bw ( KiB/s): min= 2176, max= 2688, per=4.14%, avg=2411.42, stdev=135.91, samples=19
00:37:06.235 iops : min= 544, max= 672, avg=602.79, stdev=33.95, samples=19
00:37:06.235 lat (msec) : 10=0.26%, 20=0.26%, 50=99.39%, 100=0.08%
00:37:06.235 cpu : usr=97.91%, sys=1.29%, ctx=272, majf=0, minf=9
00:37:06.235 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0%
00:37:06.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.235 filename0: (groupid=0, jobs=1): err= 0: pid=1962047: Wed Nov 20 08:34:18 2024
00:37:06.235 read: IOPS=605, BW=2423KiB/s (2481kB/s)(23.7MiB/10010msec)
00:37:06.235 slat (usec): min=4, max=106, avg=52.96, stdev=22.58
00:37:06.235 clat (usec): min=9165, max=51735, avg=25924.65, stdev=2381.87
00:37:06.235 lat (usec): min=9174, max=51752, avg=25977.61, stdev=2383.10
00:37:06.235 clat percentiles (usec):
00:37:06.235 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511],
00:37:06.235 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822],
00:37:06.235 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492],
00:37:06.235 | 99.00th=[30016], 99.50th=[30278], 99.90th=[51643], 99.95th=[51643],
00:37:06.235 | 99.99th=[51643]
00:37:06.235 bw ( KiB/s): min= 2176, max= 2688, per=4.14%, avg=2411.21, stdev=136.30, samples=19
00:37:06.235 iops : min= 544, max= 672, avg=602.74, stdev=34.04, samples=19
00:37:06.235 lat (msec) : 10=0.26%, 20=0.26%, 50=99.21%, 100=0.26%
00:37:06.235 cpu : usr=98.07%, sys=1.18%, ctx=146, majf=0, minf=9
00:37:06.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.235 filename0: (groupid=0, jobs=1): err= 0: pid=1962048: Wed Nov 20 08:34:18 2024
00:37:06.235 read: IOPS=605, BW=2423KiB/s (2481kB/s)(23.7MiB/10012msec)
00:37:06.235 slat (nsec): min=4807, max=83977, avg=28102.32, stdev=18136.03
00:37:06.235 clat (usec): min=17802, max=32142, avg=26128.51, stdev=1687.30
00:37:06.235 lat (usec): min=17817, max=32157, avg=26156.62, stdev=1690.34
00:37:06.235 clat percentiles (usec):
00:37:06.235 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24511], 20.00th=[24773],
00:37:06.235 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084],
00:37:06.235 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754],
00:37:06.235 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540],
00:37:06.235 | 99.99th=[32113]
00:37:06.235 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2418.16, stdev=102.87, samples=19
00:37:06.235 iops : min= 576, max= 640, avg=604.47, stdev=25.68, samples=19
00:37:06.235 lat (msec) : 20=0.03%, 50=99.97%
00:37:06.235 cpu : usr=98.65%, sys=0.84%, ctx=69, majf=0, minf=9
00:37:06.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.235 filename0: (groupid=0, jobs=1): err= 0: pid=1962049: Wed Nov 20 08:34:18 2024
00:37:06.235 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.7MiB/10007msec)
00:37:06.235 slat (nsec): min=6179, max=99783, avg=48353.78, stdev=19553.80
00:37:06.235 clat (usec): min=13340, max=36533, avg=25990.35, stdev=1822.38
00:37:06.235 lat (usec): min=13374, max=36550, avg=26038.70, stdev=1824.20
00:37:06.235 clat percentiles (usec):
00:37:06.235 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511],
00:37:06.235 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084],
00:37:06.235 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492],
00:37:06.235 | 99.00th=[30016], 99.50th=[30278], 99.90th=[31851], 99.95th=[31851],
00:37:06.235 | 99.99th=[36439]
00:37:06.235 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2417.95, stdev=111.59, samples=19
00:37:06.235 iops : min= 544, max= 640, avg=604.42, stdev=27.86, samples=19
00:37:06.235 lat (msec) : 20=0.26%, 50=99.74%
00:37:06.235 cpu : usr=98.55%, sys=0.99%, ctx=54, majf=0, minf=9
00:37:06.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.235 filename0: (groupid=0, jobs=1): err= 0: pid=1962050: Wed Nov 20 08:34:18 2024
00:37:06.235 read: IOPS=608, BW=2433KiB/s (2491kB/s)(23.8MiB/10023msec)
00:37:06.235 slat (nsec): min=6610, max=84970, avg=31361.19, stdev=15511.75
00:37:06.235 clat (usec): min=10238, max=31724, avg=26073.27, stdev=2046.86
00:37:06.235 lat (usec): min=10256, max=31764, avg=26104.63, stdev=2046.76
00:37:06.235 clat percentiles (usec):
00:37:06.235 | 1.00th=[19792], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773],
00:37:06.235 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346],
00:37:06.235 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754],
00:37:06.235 | 99.00th=[30278], 99.50th=[30278], 99.90th=[31589], 99.95th=[31589],
00:37:06.235 | 99.99th=[31851]
00:37:06.235 bw ( KiB/s): min= 2299, max= 2560, per=4.18%, avg=2431.45, stdev=101.67, samples=20
00:37:06.235 iops : min= 574, max= 640, avg=607.80, stdev=25.44, samples=20
00:37:06.235 lat (msec) : 20=1.05%, 50=98.95%
00:37:06.235 cpu : usr=98.64%, sys=0.98%, ctx=42, majf=0, minf=9
00:37:06.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.235 filename0: (groupid=0, jobs=1): err= 0: pid=1962051: Wed Nov 20 08:34:18 2024
00:37:06.235 read: IOPS=608, BW=2433KiB/s (2491kB/s)(23.8MiB/10023msec)
00:37:06.235 slat (nsec): min=6235, max=92157, avg=22602.91, stdev=15078.11
00:37:06.235 clat (usec): min=8085, max=31642, avg=26137.59, stdev=2058.12
00:37:06.235 lat (usec): min=8097, max=31680, avg=26160.19, stdev=2056.41
00:37:06.235 clat percentiles (usec):
00:37:06.235 | 1.00th=[19792], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773],
00:37:06.235 | 30.00th=[24773], 40.00th=[25560], 50.00th=[25822], 60.00th=[26346],
00:37:06.235 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[30016],
00:37:06.235 | 99.00th=[30278], 99.50th=[30278], 99.90th=[31589], 99.95th=[31589],
00:37:06.235 | 99.99th=[31589]
00:37:06.235 bw ( KiB/s): min= 2299, max= 2560, per=4.18%, avg=2431.45, stdev=101.67, samples=20
00:37:06.235 iops : min= 574, max= 640, avg=607.80, stdev=25.44, samples=20
00:37:06.235 lat (msec) : 10=0.03%, 20=0.98%, 50=98.98%
00:37:06.235 cpu : usr=98.81%, sys=0.81%, ctx=17, majf=0, minf=9
00:37:06.235 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.235 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.235 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.235 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.235 filename0: (groupid=0, jobs=1): err= 0: pid=1962052: Wed Nov 20 08:34:18 2024
00:37:06.235 read: IOPS=608, BW=2434KiB/s (2492kB/s)(23.8MiB/10019msec)
00:37:06.235 slat (usec): min=7, max=104, avg=44.45, stdev=24.72
00:37:06.235 clat (usec): min=11533, max=31696, avg=25966.53, stdev=2099.96
00:37:06.235 lat (usec): min=11548, max=31729, avg=26010.98, stdev=2099.61
00:37:06.235 clat percentiles (usec):
00:37:06.235 | 1.00th=[18482], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773],
00:37:06.235 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084],
00:37:06.235 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754],
00:37:06.235 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30802],
00:37:06.235 | 99.99th=[31589]
00:37:06.235 bw ( KiB/s): min= 2299, max= 2688, per=4.18%, avg=2431.45, stdev=130.98, samples=20
00:37:06.236 iops : min= 574, max= 672, avg=607.80, stdev=32.73, samples=20
00:37:06.236 lat (msec) : 20=1.05%, 50=98.95%
00:37:06.236 cpu : usr=98.24%, sys=1.17%, ctx=77, majf=0, minf=9
00:37:06.236 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.236 filename1: (groupid=0, jobs=1): err= 0: pid=1962053: Wed Nov 20 08:34:18 2024
00:37:06.236 read: IOPS=608, BW=2436KiB/s (2494kB/s)(23.8MiB/10010msec)
00:37:06.236 slat (nsec): min=7331, max=84722, avg=21664.69, stdev=17940.93
00:37:06.236 clat (usec): min=11503, max=32032, avg=26076.66, stdev=2068.99
00:37:06.236 lat (usec): min=11531, max=32041, avg=26098.33, stdev=2071.01
00:37:06.236 clat percentiles (usec):
00:37:06.236 | 1.00th=[18482], 5.00th=[24249], 10.00th=[24773], 20.00th=[24773],
00:37:06.236 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084],
00:37:06.236 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754],
00:37:06.236 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278],
00:37:06.236 | 99.99th=[32113]
00:37:06.236 bw ( KiB/s): min= 2299, max= 2688, per=4.18%, avg=2431.42, stdev=134.57, samples=19
00:37:06.236 iops : min= 574, max= 672, avg=607.79, stdev=33.63, samples=19
00:37:06.236 lat (msec) : 20=1.05%, 50=98.95%
00:37:06.236 cpu : usr=98.60%, sys=1.00%, ctx=32, majf=0, minf=9
00:37:06.236 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.236 filename1: (groupid=0, jobs=1): err= 0: pid=1962054: Wed Nov 20 08:34:18 2024
00:37:06.236 read: IOPS=608, BW=2433KiB/s (2491kB/s)(23.8MiB/10023msec)
00:37:06.236 slat (nsec): min=6340, max=87528, avg=36968.22, stdev=15679.96
00:37:06.236 clat (usec): min=10414, max=31674, avg=25971.33, stdev=2039.45
00:37:06.236 lat (usec): min=10448, max=31701, avg=26008.29, stdev=2040.97
00:37:06.236 clat percentiles (usec):
00:37:06.236 | 1.00th=[19792], 5.00th=[24249], 10.00th=[24249], 20.00th=[24511],
00:37:06.236 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26084],
00:37:06.236 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754],
00:37:06.236 | 99.00th=[30016], 99.50th=[30278], 99.90th=[31327], 99.95th=[31589],
00:37:06.236 | 99.99th=[31589]
00:37:06.236 bw ( KiB/s): min= 2299, max= 2560, per=4.18%, avg=2431.45, stdev=101.67, samples=20
00:37:06.236 iops : min= 574, max= 640, avg=607.80, stdev=25.44, samples=20
00:37:06.236 lat (msec) : 20=1.02%, 50=98.98%
00:37:06.236 cpu : usr=98.74%, sys=0.87%, ctx=25, majf=0, minf=9
00:37:06.236 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.236 filename1: (groupid=0, jobs=1): err= 0: pid=1962055: Wed Nov 20 08:34:18 2024
00:37:06.236 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.7MiB/10007msec)
00:37:06.236 slat (usec): min=6, max=106, avg=52.14, stdev=22.39
00:37:06.236 clat (usec): min=12931, max=31933, avg=25978.70, stdev=1818.80
00:37:06.236 lat (usec): min=12957, max=31949, avg=26030.84, stdev=1819.49
00:37:06.236 clat percentiles (usec):
00:37:06.236 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511],
00:37:06.236 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084],
00:37:06.236 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492],
00:37:06.236 | 99.00th=[30016], 99.50th=[30278], 99.90th=[31851], 99.95th=[31851],
00:37:06.236 | 99.99th=[31851]
00:37:06.236 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2417.95, stdev=111.59, samples=19
00:37:06.236 iops : min= 544, max= 640, avg=604.42, stdev=27.86, samples=19
00:37:06.236 lat (msec) : 20=0.26%, 50=99.74%
00:37:06.236 cpu : usr=98.57%, sys=0.92%, ctx=106, majf=0, minf=9
00:37:06.236 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.236 filename1: (groupid=0, jobs=1): err= 0: pid=1962056: Wed Nov 20 08:34:18 2024
00:37:06.236 read: IOPS=608, BW=2433KiB/s (2491kB/s)(23.8MiB/10024msec)
00:37:06.236 slat (nsec): min=8272, max=93856, avg=43026.83, stdev=19298.51
00:37:06.236 clat (usec): min=10277, max=31537, avg=25894.09, stdev=2044.28
00:37:06.236 lat (usec): min=10295, max=31589, avg=25937.12, stdev=2046.52
00:37:06.236 clat percentiles (usec):
00:37:06.236 | 1.00th=[19792], 5.00th=[24249], 10.00th=[24249], 20.00th=[24511],
00:37:06.236 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084],
00:37:06.236 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28967], 95.00th=[29492],
00:37:06.236 | 99.00th=[30016], 99.50th=[30278], 99.90th=[31327], 99.95th=[31327],
00:37:06.236 | 99.99th=[31589]
00:37:06.236 bw ( KiB/s): min= 2299, max= 2560, per=4.18%, avg=2431.45, stdev=101.67, samples=20
00:37:06.236 iops : min= 574, max= 640, avg=607.80, stdev=25.44, samples=20
00:37:06.236 lat (msec) : 20=1.05%, 50=98.95%
00:37:06.236 cpu : usr=98.32%, sys=1.14%, ctx=128, majf=0, minf=9
00:37:06.236 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.236 filename1: (groupid=0, jobs=1): err= 0: pid=1962057: Wed Nov 20 08:34:18 2024
00:37:06.236 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.7MiB/10007msec)
00:37:06.236 slat (nsec): min=3855, max=94094, avg=41573.82, stdev=20460.43
00:37:06.236 clat (usec): min=19469, max=31625, avg=25985.08, stdev=1733.44
00:37:06.236 lat (usec): min=19477, max=31667, avg=26026.66, stdev=1735.59
00:37:06.236 clat percentiles (usec):
00:37:06.236 | 1.00th=[23725], 5.00th=[24249], 10.00th=[24249], 20.00th=[24511],
00:37:06.236 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084],
00:37:06.236 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28967], 95.00th=[29492],
00:37:06.236 | 99.00th=[30016], 99.50th=[30278], 99.90th=[31327], 99.95th=[31589],
00:37:06.236 | 99.99th=[31589]
00:37:06.236 bw ( KiB/s): min= 2176, max= 2688, per=4.15%, avg=2417.68, stdev=140.96, samples=19
00:37:06.236 iops : min= 544, max= 672, avg=604.32, stdev=35.28, samples=19
00:37:06.236 lat (msec) : 20=0.26%, 50=99.74%
00:37:06.236 cpu : usr=98.36%, sys=1.01%, ctx=125, majf=0, minf=9
00:37:06.236 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.236 filename1: (groupid=0, jobs=1): err= 0: pid=1962058: Wed Nov 20 08:34:18 2024
00:37:06.236 read: IOPS=609, BW=2437KiB/s (2496kB/s)(23.8MiB/10005msec)
00:37:06.236 slat (nsec): min=6220, max=84218, avg=19634.16, stdev=13272.72
00:37:06.236 clat (usec): min=5007, max=37086, avg=26105.80, stdev=2266.08
00:37:06.236 lat (usec): min=5016, max=37124, avg=26125.44, stdev=2267.88
00:37:06.236 clat percentiles (usec):
00:37:06.236 | 1.00th=[17957], 5.00th=[24249], 10.00th=[24773], 20.00th=[24773],
00:37:06.236 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346],
00:37:06.236 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754],
00:37:06.236 | 99.00th=[30278], 99.50th=[30278], 99.90th=[30540], 99.95th=[30540],
00:37:06.236 | 99.99th=[36963]
00:37:06.236 bw ( KiB/s): min= 2299, max= 2688, per=4.19%, avg=2437.84, stdev=123.87, samples=19
00:37:06.236 iops : min= 574, max= 672, avg=609.37, stdev=30.96, samples=19
00:37:06.236 lat (msec) : 10=0.31%, 20=0.74%, 50=98.95%
00:37:06.236 cpu : usr=98.62%, sys=0.96%, ctx=31, majf=0, minf=9
00:37:06.236 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.236 filename1: (groupid=0, jobs=1): err= 0: pid=1962059: Wed Nov 20 08:34:18 2024
00:37:06.236 read: IOPS=605, BW=2423KiB/s (2482kB/s)(23.7MiB/10009msec)
00:37:06.236 slat (usec): min=4, max=101, avg=49.06, stdev=19.70
00:37:06.236 clat (usec): min=9473, max=50488, avg=25976.45, stdev=2341.41
00:37:06.236 lat (usec): min=9504, max=50502, avg=26025.51, stdev=2342.40
00:37:06.236 clat percentiles (usec):
00:37:06.236 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511],
00:37:06.236 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084],
00:37:06.236 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492],
00:37:06.236 | 99.00th=[30016], 99.50th=[30278], 99.90th=[50594], 99.95th=[50594],
00:37:06.236 | 99.99th=[50594]
00:37:06.236 bw ( KiB/s): min= 2176, max= 2688, per=4.14%, avg=2411.42, stdev=135.91, samples=19
00:37:06.236 iops : min= 544, max= 672, avg=602.79, stdev=33.95, samples=19
00:37:06.236 lat (msec) : 10=0.26%, 20=0.26%, 50=99.21%, 100=0.26%
00:37:06.236 cpu : usr=98.77%, sys=0.84%, ctx=37, majf=0, minf=9
00:37:06.236 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.236 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.236 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.236 filename1: (groupid=0, jobs=1): err= 0: pid=1962060: Wed Nov 20 08:34:18 2024
00:37:06.236 read: IOPS=605, BW=2422KiB/s (2480kB/s)(23.7MiB/10014msec)
00:37:06.236 slat (nsec): min=6545, max=84210, avg=24309.76, stdev=12429.88
00:37:06.237 clat (usec): min=17528, max=36926, avg=26201.78, stdev=1748.06
00:37:06.237 lat (usec): min=17538, max=36942, avg=26226.09, stdev=1750.21
00:37:06.237 clat percentiles (usec):
00:37:06.237 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24511], 20.00th=[24773],
00:37:06.237 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346],
00:37:06.237 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754],
00:37:06.237 | 99.00th=[30278], 99.50th=[30540], 99.90th=[31851], 99.95th=[33817],
00:37:06.237 | 99.99th=[36963]
00:37:06.237 bw ( KiB/s): min= 2304, max= 2560, per=4.15%, avg=2417.95, stdev=103.11, samples=19
00:37:06.237 iops : min= 576, max= 640, avg=604.42, stdev=25.74, samples=19
00:37:06.237 lat (msec) : 20=0.13%, 50=99.87%
00:37:06.237 cpu : usr=98.60%, sys=0.96%, ctx=42, majf=0, minf=9
00:37:06.237 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:37:06.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.237 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.237 filename2: (groupid=0, jobs=1): err= 0: pid=1962061: Wed Nov 20 08:34:18 2024
00:37:06.237 read: IOPS=608, BW=2433KiB/s (2491kB/s)(23.8MiB/10023msec)
00:37:06.237 slat (nsec): min=8807, max=93656, avg=40061.90, stdev=18169.36
00:37:06.237 clat (usec): min=10370, max=31636, avg=25944.51, stdev=2053.02
00:37:06.237 lat (usec): min=10390, max=31653, avg=25984.57, stdev=2054.00
00:37:06.237 clat percentiles (usec):
00:37:06.237 | 1.00th=[19792], 5.00th=[24249], 10.00th=[24249], 20.00th=[24511],
00:37:06.237 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26084],
00:37:06.237 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754],
00:37:06.237 | 99.00th=[30278], 99.50th=[30278], 99.90th=[31327], 99.95th=[31589],
00:37:06.237 | 99.99th=[31589]
00:37:06.237 bw ( KiB/s): min= 2299, max= 2560, per=4.18%, avg=2431.45, stdev=101.67, samples=20
00:37:06.237 iops : min= 574, max= 640, avg=607.80, stdev=25.44, samples=20
00:37:06.237 lat (msec) : 20=1.02%, 50=98.98%
00:37:06.237 cpu : usr=98.56%, sys=1.08%, ctx=16, majf=0, minf=9
00:37:06.237 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.237 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.237 filename2: (groupid=0, jobs=1): err= 0: pid=1962062: Wed Nov 20 08:34:18 2024
00:37:06.237 read: IOPS=606, BW=2427KiB/s (2485kB/s)(23.7MiB/10011msec)
00:37:06.237 slat (usec): min=3, max=136, avg=42.52, stdev=26.00
00:37:06.237 clat (usec): min=11938, max=37017, avg=25966.41, stdev=2265.90
00:37:06.237 lat (usec): min=11947, max=37078, avg=26008.93, stdev=2266.91
00:37:06.237 clat percentiles (usec):
00:37:06.237 | 1.00th=[20841], 5.00th=[23725], 10.00th=[23987], 20.00th=[24249],
00:37:06.237 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25560], 60.00th=[26084],
00:37:06.237 | 70.00th=[26608], 80.00th=[27919], 90.00th=[29230], 95.00th=[29754],
00:37:06.237 | 99.00th=[32375], 99.50th=[35390], 99.90th=[36963], 99.95th=[36963],
00:37:06.237 | 99.99th=[36963]
00:37:06.237 bw ( KiB/s): min= 2288, max= 2560, per=4.16%, avg=2423.00, stdev=108.99, samples=19
00:37:06.237 iops : min= 572, max= 640, avg=605.68, stdev=27.26, samples=19
00:37:06.237 lat (msec) : 20=0.86%, 50=99.14%
00:37:06.237 cpu : usr=97.99%, sys=1.35%, ctx=90, majf=0, minf=9
00:37:06.237 IO depths : 1=4.9%, 2=9.8%, 4=20.3%, 8=56.5%, 16=8.5%, 32=0.0%, >=64=0.0%
00:37:06.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 complete : 0=0.0%, 4=93.0%, 8=2.1%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 issued rwts: total=6074,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.237 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.237 filename2: (groupid=0, jobs=1): err= 0: pid=1962063: Wed Nov 20 08:34:18 2024
00:37:06.237 read: IOPS=605, BW=2423KiB/s (2482kB/s)(23.7MiB/10009msec)
00:37:06.237 slat (nsec): min=4383, max=78867, avg=27999.68, stdev=18217.16
00:37:06.237 clat (usec): min=2369, max=52663, avg=26117.14, stdev=2309.95
00:37:06.237 lat (usec): min=2377, max=52677, avg=26145.14, stdev=2312.26
00:37:06.237 clat percentiles (usec):
00:37:06.237 | 1.00th=[23987], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773],
00:37:06.237 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084],
00:37:06.237 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754],
00:37:06.237 | 99.00th=[30278], 99.50th=[32375], 99.90th=[44827], 99.95th=[44827],
00:37:06.237 | 99.99th=[52691]
00:37:06.237 bw ( KiB/s): min= 2176, max= 2560, per=4.15%, avg=2417.68, stdev=120.16, samples=19
00:37:06.237 iops : min= 544, max= 640, avg=604.32, stdev=30.09, samples=19
00:37:06.237 lat (msec) : 4=0.12%, 10=0.15%, 20=0.28%, 50=99.41%, 100=0.05%
00:37:06.237 cpu : usr=98.19%, sys=1.19%, ctx=81, majf=0, minf=9
00:37:06.237 IO depths : 1=5.7%, 2=11.8%, 4=24.9%, 8=50.7%, 16=6.8%, 32=0.0%, >=64=0.0%
00:37:06.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.237 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.237 filename2: (groupid=0, jobs=1): err= 0: pid=1962064: Wed Nov 20 08:34:18 2024
00:37:06.237 read: IOPS=605, BW=2423KiB/s (2481kB/s)(23.7MiB/10010msec)
00:37:06.237 slat (usec): min=4, max=109, avg=51.99, stdev=23.98
00:37:06.237 clat (usec): min=9207, max=51644, avg=25906.38, stdev=2379.59
00:37:06.237 lat (usec): min=9217, max=51659, avg=25958.38, stdev=2381.22
00:37:06.237 clat percentiles (usec):
00:37:06.237 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511],
00:37:06.237 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822],
00:37:06.237 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492],
00:37:06.237 | 99.00th=[30016], 99.50th=[30278], 99.90th=[51643], 99.95th=[51643],
00:37:06.237 | 99.99th=[51643]
00:37:06.237 bw ( KiB/s): min= 2180, max= 2688, per=4.14%, avg=2411.16, stdev=121.74, samples=19
00:37:06.237 iops : min= 545, max= 672, avg=602.68, stdev=30.39, samples=19
00:37:06.237 lat (msec) : 10=0.26%, 20=0.26%, 50=99.21%, 100=0.26%
00:37:06.237 cpu : usr=98.57%, sys=0.78%, ctx=80, majf=0, minf=9
00:37:06.237 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.237 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.237 filename2: (groupid=0, jobs=1): err= 0: pid=1962065: Wed Nov 20 08:34:18 2024
00:37:06.237 read: IOPS=605, BW=2423KiB/s (2481kB/s)(23.7MiB/10011msec)
00:37:06.237 slat (nsec): min=4845, max=99681, avg=40389.80, stdev=19617.91
00:37:06.237 clat (usec): min=13608, max=36015, avg=26107.05, stdev=1854.93
00:37:06.237 lat (usec): min=13633, max=36029, avg=26147.44, stdev=1855.89
00:37:06.237 clat percentiles (usec):
00:37:06.237 | 1.00th=[23987], 5.00th=[24249], 10.00th=[24249], 20.00th=[24511],
00:37:06.237 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26084],
00:37:06.237 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754],
00:37:06.237 | 99.00th=[30016], 99.50th=[30278], 99.90th=[35914], 99.95th=[35914],
00:37:06.237 | 99.99th=[35914]
00:37:06.237 bw ( KiB/s): min= 2299, max= 2560, per=4.15%, avg=2417.95, stdev=103.46, samples=19
00:37:06.237 iops : min= 574, max= 640, avg=604.42, stdev=25.88, samples=19
00:37:06.237 lat (msec) : 20=0.26%, 50=99.74%
00:37:06.237 cpu : usr=98.49%, sys=1.05%, ctx=50, majf=0, minf=9
00:37:06.237 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.237 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.237 filename2: (groupid=0, jobs=1): err= 0: pid=1962066: Wed Nov 20 08:34:18 2024
00:37:06.237 read: IOPS=605, BW=2423KiB/s (2482kB/s)(23.7MiB/10009msec)
00:37:06.237 slat (usec): min=4, max=172, avg=49.65, stdev=20.76
00:37:06.237 clat (usec): min=9456, max=50544, avg=25951.83, stdev=2341.20
00:37:06.237 lat (usec): min=9484, max=50560, avg=26001.48, stdev=2342.68
00:37:06.237 clat percentiles (usec):
00:37:06.237 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511],
00:37:06.237 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084],
00:37:06.237 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492],
00:37:06.237 | 99.00th=[30016], 99.50th=[30278], 99.90th=[50594], 99.95th=[50594],
00:37:06.237 | 99.99th=[50594]
00:37:06.237 bw ( KiB/s): min= 2176, max= 2688, per=4.14%, avg=2411.42, stdev=135.91, samples=19
00:37:06.237 iops : min= 544, max= 672, avg=602.79, stdev=33.95, samples=19
00:37:06.237 lat (msec) : 10=0.26%, 20=0.26%, 50=99.21%, 100=0.26%
00:37:06.237 cpu : usr=98.08%, sys=1.30%, ctx=105, majf=0, minf=9
00:37:06.237 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:37:06.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.237 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.237 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.237 filename2: (groupid=0, jobs=1): err= 0: pid=1962067: Wed Nov 20 08:34:18 2024
00:37:06.237 read: IOPS=608, BW=2434KiB/s (2492kB/s)(23.8MiB/10019msec)
00:37:06.237 slat (usec): min=7, max=107, avg=50.67, stdev=24.29
00:37:06.237 clat (usec): min=11519, max=30425, avg=25886.24, stdev=2074.14
00:37:06.237 lat (usec): min=11533, max=30441, avg=25936.90, stdev=2075.98
00:37:06.237 clat percentiles (usec):
00:37:06.237 | 1.00th=[18482], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511],
00:37:06.237 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084],
00:37:06.237 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492],
00:37:06.237 | 99.00th=[30016], 99.50th=[30278], 99.90th=[30278], 99.95th=[30278],
00:37:06.237 | 99.99th=[30540]
00:37:06.237 bw ( KiB/s): min= 2299, max= 2688, per=4.18%, avg=2431.45, stdev=130.98, samples=20
00:37:06.237 iops : min= 574, max= 672, avg=607.80, stdev=32.73, samples=20
00:37:06.238 lat (msec) : 20=1.02%, 50=98.98%
00:37:06.238 cpu : usr=98.55%, sys=1.07%, ctx=56, majf=0, minf=9
00:37:06.238 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.238 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.238 issued rwts: total=6096,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.238 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.238 filename2: (groupid=0, jobs=1): err= 0: pid=1962068: Wed Nov 20 08:34:18 2024
00:37:06.238 read: IOPS=605, BW=2424KiB/s (2482kB/s)(23.7MiB/10007msec)
00:37:06.238 slat (usec): min=10, max=117, avg=53.91, stdev=22.38
00:37:06.238 clat (usec): min=9244, max=53423, avg=25913.79, stdev=2318.09
00:37:06.238 lat (usec): min=9264, max=53460, avg=25967.70, stdev=2319.91
00:37:06.238 clat percentiles (usec):
00:37:06.238 | 1.00th=[23725], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511],
00:37:06.238 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[25822],
00:37:06.238 | 70.00th=[26346], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492],
00:37:06.238 | 99.00th=[30016], 99.50th=[30278], 99.90th=[49021], 99.95th=[49021],
00:37:06.238 | 99.99th=[53216]
00:37:06.238 bw ( KiB/s): min= 2176, max= 2688, per=4.14%, avg=2411.21, stdev=143.07, samples=19
00:37:06.238 iops : min= 544, max= 672, avg=602.74, stdev=35.77, samples=19
00:37:06.238 lat (msec) : 10=0.26%, 20=0.26%, 50=99.44%, 100=0.03%
00:37:06.238 cpu : usr=98.51%, sys=1.00%, ctx=67, majf=0, minf=9
00:37:06.238 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:37:06.238 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.238 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:37:06.238 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:37:06.238 latency : target=0, window=0, percentile=100.00%, depth=16
00:37:06.238
00:37:06.238 Run status group 0 (all jobs):
00:37:06.238 READ: bw=56.8MiB/s (59.6MB/s), 2422KiB/s-2437KiB/s (2480kB/s-2496kB/s), io=570MiB (597MB), run=10005-10024msec
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@"
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.238 bdev_null0
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a
10.0.0.2 -s 4420 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.238 [2024-11-20 08:34:19.107273] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.238 bdev_null1 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:06.238 08:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:06.239 { 00:37:06.239 
"params": { 00:37:06.239 "name": "Nvme$subsystem", 00:37:06.239 "trtype": "$TEST_TRANSPORT", 00:37:06.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:06.239 "adrfam": "ipv4", 00:37:06.239 "trsvcid": "$NVMF_PORT", 00:37:06.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:06.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:06.239 "hdgst": ${hdgst:-false}, 00:37:06.239 "ddgst": ${ddgst:-false} 00:37:06.239 }, 00:37:06.239 "method": "bdev_nvme_attach_controller" 00:37:06.239 } 00:37:06.239 EOF 00:37:06.239 )") 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:37:06.239 08:34:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:06.239 { 00:37:06.239 "params": { 00:37:06.239 "name": "Nvme$subsystem", 00:37:06.239 "trtype": "$TEST_TRANSPORT", 00:37:06.239 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:06.239 "adrfam": "ipv4", 00:37:06.239 "trsvcid": "$NVMF_PORT", 00:37:06.239 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:06.239 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:06.239 "hdgst": ${hdgst:-false}, 00:37:06.239 "ddgst": ${ddgst:-false} 00:37:06.239 }, 00:37:06.239 "method": "bdev_nvme_attach_controller" 00:37:06.239 } 00:37:06.239 EOF 00:37:06.239 )") 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:06.239 "params": { 00:37:06.239 "name": "Nvme0", 00:37:06.239 "trtype": "tcp", 00:37:06.239 "traddr": "10.0.0.2", 00:37:06.239 "adrfam": "ipv4", 00:37:06.239 "trsvcid": "4420", 00:37:06.239 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:06.239 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:06.239 "hdgst": false, 00:37:06.239 "ddgst": false 00:37:06.239 }, 00:37:06.239 "method": "bdev_nvme_attach_controller" 00:37:06.239 },{ 00:37:06.239 "params": { 00:37:06.239 "name": "Nvme1", 00:37:06.239 "trtype": "tcp", 00:37:06.239 "traddr": "10.0.0.2", 00:37:06.239 "adrfam": "ipv4", 00:37:06.239 "trsvcid": "4420", 00:37:06.239 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:37:06.239 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:37:06.239 "hdgst": false, 00:37:06.239 "ddgst": false 00:37:06.239 }, 00:37:06.239 "method": "bdev_nvme_attach_controller" 00:37:06.239 }' 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:06.239 08:34:19 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:06.239 08:34:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:06.239 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:06.239 ... 00:37:06.239 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:37:06.239 ... 00:37:06.239 fio-3.35 00:37:06.239 Starting 4 threads 00:37:11.508 00:37:11.508 filename0: (groupid=0, jobs=1): err= 0: pid=1964029: Wed Nov 20 08:34:25 2024 00:37:11.508 read: IOPS=2802, BW=21.9MiB/s (23.0MB/s)(110MiB/5002msec) 00:37:11.508 slat (nsec): min=5942, max=56160, avg=8671.52, stdev=3150.93 00:37:11.508 clat (usec): min=637, max=5559, avg=2828.26, stdev=394.97 00:37:11.508 lat (usec): min=648, max=5572, avg=2836.93, stdev=394.99 00:37:11.508 clat percentiles (usec): 00:37:11.508 | 1.00th=[ 1745], 5.00th=[ 2212], 10.00th=[ 2343], 20.00th=[ 2540], 00:37:11.508 | 30.00th=[ 2671], 40.00th=[ 2769], 50.00th=[ 2900], 60.00th=[ 2966], 00:37:11.508 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3195], 95.00th=[ 3392], 00:37:11.508 | 99.00th=[ 4015], 99.50th=[ 4359], 99.90th=[ 5080], 99.95th=[ 5211], 00:37:11.508 | 99.99th=[ 5342] 00:37:11.508 bw ( KiB/s): min=21707, max=23360, per=26.08%, avg=22292.78, stdev=539.77, samples=9 00:37:11.508 iops : min= 2713, max= 2920, avg=2786.56, stdev=67.52, samples=9 00:37:11.508 lat (usec) : 750=0.01%, 1000=0.04% 00:37:11.508 lat (msec) : 2=2.22%, 4=96.71%, 10=1.03% 00:37:11.508 cpu : usr=95.62%, sys=4.08%, ctx=10, majf=0, minf=0 00:37:11.508 IO depths : 1=0.4%, 2=6.6%, 4=64.9%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:11.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.508 
complete : 0=0.0%, 4=92.9%, 8=7.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.508 issued rwts: total=14017,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.508 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:11.508 filename0: (groupid=0, jobs=1): err= 0: pid=1964030: Wed Nov 20 08:34:25 2024 00:37:11.508 read: IOPS=2583, BW=20.2MiB/s (21.2MB/s)(101MiB/5001msec) 00:37:11.508 slat (nsec): min=5959, max=38797, avg=8646.64, stdev=3264.95 00:37:11.508 clat (usec): min=561, max=5562, avg=3070.56, stdev=442.67 00:37:11.508 lat (usec): min=573, max=5572, avg=3079.20, stdev=442.55 00:37:11.508 clat percentiles (usec): 00:37:11.508 | 1.00th=[ 2212], 5.00th=[ 2606], 10.00th=[ 2737], 20.00th=[ 2802], 00:37:11.508 | 30.00th=[ 2933], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2999], 00:37:11.508 | 70.00th=[ 3097], 80.00th=[ 3228], 90.00th=[ 3556], 95.00th=[ 3949], 00:37:11.508 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5473], 99.95th=[ 5538], 00:37:11.508 | 99.99th=[ 5538] 00:37:11.508 bw ( KiB/s): min=19760, max=21648, per=24.31%, avg=20777.89, stdev=704.74, samples=9 00:37:11.508 iops : min= 2470, max= 2706, avg=2597.22, stdev=88.09, samples=9 00:37:11.508 lat (usec) : 750=0.05%, 1000=0.01% 00:37:11.508 lat (msec) : 2=0.29%, 4=94.87%, 10=4.78% 00:37:11.508 cpu : usr=95.66%, sys=4.04%, ctx=6, majf=0, minf=0 00:37:11.508 IO depths : 1=0.2%, 2=3.0%, 4=69.6%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:11.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.508 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.508 issued rwts: total=12921,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.508 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:11.508 filename1: (groupid=0, jobs=1): err= 0: pid=1964031: Wed Nov 20 08:34:25 2024 00:37:11.508 read: IOPS=2629, BW=20.5MiB/s (21.5MB/s)(103MiB/5001msec) 00:37:11.508 slat (nsec): min=5956, max=65710, avg=8882.02, stdev=3323.83 00:37:11.508 clat 
(usec): min=835, max=5566, avg=3015.33, stdev=492.08 00:37:11.508 lat (usec): min=842, max=5579, avg=3024.21, stdev=491.72 00:37:11.508 clat percentiles (usec): 00:37:11.508 | 1.00th=[ 2073], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2704], 00:37:11.508 | 30.00th=[ 2868], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2966], 00:37:11.508 | 70.00th=[ 3032], 80.00th=[ 3195], 90.00th=[ 3589], 95.00th=[ 4228], 00:37:11.508 | 99.00th=[ 4686], 99.50th=[ 4883], 99.90th=[ 5211], 99.95th=[ 5276], 00:37:11.508 | 99.99th=[ 5538] 00:37:11.508 bw ( KiB/s): min=20416, max=21712, per=24.52%, avg=20963.56, stdev=495.21, samples=9 00:37:11.508 iops : min= 2552, max= 2714, avg=2620.44, stdev=61.90, samples=9 00:37:11.508 lat (usec) : 1000=0.02% 00:37:11.508 lat (msec) : 2=0.65%, 4=92.75%, 10=6.58% 00:37:11.508 cpu : usr=95.80%, sys=3.88%, ctx=7, majf=0, minf=0 00:37:11.508 IO depths : 1=0.2%, 2=5.2%, 4=66.3%, 8=28.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:11.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.508 complete : 0=0.0%, 4=93.2%, 8=6.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.508 issued rwts: total=13152,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.508 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:11.508 filename1: (groupid=0, jobs=1): err= 0: pid=1964032: Wed Nov 20 08:34:25 2024 00:37:11.508 read: IOPS=2673, BW=20.9MiB/s (21.9MB/s)(105MiB/5004msec) 00:37:11.508 slat (nsec): min=5946, max=56691, avg=8890.11, stdev=3386.11 00:37:11.508 clat (usec): min=568, max=5277, avg=2964.74, stdev=417.06 00:37:11.508 lat (usec): min=583, max=5285, avg=2973.63, stdev=416.96 00:37:11.508 clat percentiles (usec): 00:37:11.508 | 1.00th=[ 1942], 5.00th=[ 2343], 10.00th=[ 2540], 20.00th=[ 2704], 00:37:11.508 | 30.00th=[ 2802], 40.00th=[ 2933], 50.00th=[ 2966], 60.00th=[ 2966], 00:37:11.508 | 70.00th=[ 3032], 80.00th=[ 3163], 90.00th=[ 3392], 95.00th=[ 3687], 00:37:11.508 | 99.00th=[ 4424], 99.50th=[ 4817], 99.90th=[ 5145], 99.95th=[ 
5211], 00:37:11.508 | 99.99th=[ 5276] 00:37:11.508 bw ( KiB/s): min=20816, max=22080, per=25.18%, avg=21521.78, stdev=466.92, samples=9 00:37:11.508 iops : min= 2602, max= 2760, avg=2690.22, stdev=58.36, samples=9 00:37:11.508 lat (usec) : 750=0.01% 00:37:11.508 lat (msec) : 2=1.32%, 4=95.96%, 10=2.72% 00:37:11.508 cpu : usr=95.86%, sys=3.82%, ctx=7, majf=0, minf=0 00:37:11.508 IO depths : 1=0.2%, 2=5.4%, 4=66.3%, 8=28.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:11.508 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.508 complete : 0=0.0%, 4=92.7%, 8=7.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:11.508 issued rwts: total=13379,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:11.508 latency : target=0, window=0, percentile=100.00%, depth=8 00:37:11.508 00:37:11.508 Run status group 0 (all jobs): 00:37:11.508 READ: bw=83.5MiB/s (87.5MB/s), 20.2MiB/s-21.9MiB/s (21.2MB/s-23.0MB/s), io=418MiB (438MB), run=5001-5004msec 00:37:11.508 08:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:37:11.508 08:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:37:11.508 08:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:11.508 08:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:11.508 08:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:37:11.508 08:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:11.508 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.508 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.508 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.508 08:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:11.508 08:34:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.508 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.766 00:37:11.766 real 0m24.673s 00:37:11.766 user 4m52.998s 00:37:11.766 sys 0m5.165s 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:11.766 08:34:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:37:11.766 ************************************ 00:37:11.766 END TEST fio_dif_rand_params 00:37:11.766 ************************************ 00:37:11.766 08:34:25 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:37:11.766 08:34:25 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:11.766 08:34:25 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:11.766 08:34:25 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:11.766 ************************************ 00:37:11.766 START TEST fio_dif_digest 00:37:11.766 ************************************ 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:37:11.766 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:11.767 bdev_null0 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:11.767 [2024-11-20 08:34:25.662063] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@372 -- # config=() 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # local subsystem config 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:37:11.767 { 00:37:11.767 "params": { 00:37:11.767 "name": "Nvme$subsystem", 00:37:11.767 "trtype": "$TEST_TRANSPORT", 00:37:11.767 "traddr": "$NVMF_FIRST_TARGET_IP", 00:37:11.767 "adrfam": "ipv4", 00:37:11.767 "trsvcid": "$NVMF_PORT", 00:37:11.767 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:37:11.767 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:37:11.767 "hdgst": ${hdgst:-false}, 00:37:11.767 "ddgst": ${ddgst:-false} 00:37:11.767 }, 00:37:11.767 "method": "bdev_nvme_attach_controller" 00:37:11.767 } 00:37:11.767 EOF 00:37:11.767 )") 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # cat 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@396 -- # jq . 
00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@397 -- # IFS=, 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:37:11.767 "params": { 00:37:11.767 "name": "Nvme0", 00:37:11.767 "trtype": "tcp", 00:37:11.767 "traddr": "10.0.0.2", 00:37:11.767 "adrfam": "ipv4", 00:37:11.767 "trsvcid": "4420", 00:37:11.767 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:11.767 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:11.767 "hdgst": true, 00:37:11.767 "ddgst": true 00:37:11.767 }, 00:37:11.767 "method": "bdev_nvme_attach_controller" 00:37:11.767 }' 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:37:11.767 08:34:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:37:12.024 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:37:12.024 ... 
00:37:12.024 fio-3.35 00:37:12.024 Starting 3 threads 00:37:24.230 00:37:24.230 filename0: (groupid=0, jobs=1): err= 0: pid=1965296: Wed Nov 20 08:34:36 2024 00:37:24.230 read: IOPS=305, BW=38.2MiB/s (40.0MB/s)(384MiB/10045msec) 00:37:24.230 slat (nsec): min=6249, max=51272, avg=16418.43, stdev=6424.00 00:37:24.230 clat (usec): min=7010, max=52173, avg=9789.43, stdev=1713.44 00:37:24.230 lat (usec): min=7029, max=52184, avg=9805.85, stdev=1712.84 00:37:24.230 clat percentiles (usec): 00:37:24.230 | 1.00th=[ 8160], 5.00th=[ 8586], 10.00th=[ 8848], 20.00th=[ 9110], 00:37:24.230 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:37:24.230 | 70.00th=[10028], 80.00th=[10290], 90.00th=[10552], 95.00th=[10814], 00:37:24.230 | 99.00th=[11469], 99.50th=[11731], 99.90th=[47973], 99.95th=[47973], 00:37:24.230 | 99.99th=[52167] 00:37:24.230 bw ( KiB/s): min=35840, max=40704, per=35.57%, avg=39244.80, stdev=1157.15, samples=20 00:37:24.230 iops : min= 280, max= 318, avg=306.60, stdev= 9.04, samples=20 00:37:24.230 lat (msec) : 10=65.19%, 20=34.65%, 50=0.13%, 100=0.03% 00:37:24.230 cpu : usr=95.96%, sys=3.71%, ctx=20, majf=0, minf=43 00:37:24.230 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.230 issued rwts: total=3068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.230 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:24.230 filename0: (groupid=0, jobs=1): err= 0: pid=1965297: Wed Nov 20 08:34:36 2024 00:37:24.230 read: IOPS=274, BW=34.3MiB/s (35.9MB/s)(344MiB/10043msec) 00:37:24.230 slat (nsec): min=6103, max=48795, avg=14716.35, stdev=6602.30 00:37:24.230 clat (usec): min=7824, max=48301, avg=10912.64, stdev=1718.67 00:37:24.230 lat (usec): min=7836, max=48326, avg=10927.36, stdev=1718.71 00:37:24.230 clat percentiles (usec): 00:37:24.230 
| 1.00th=[ 9241], 5.00th=[ 9634], 10.00th=[10028], 20.00th=[10290], 00:37:24.230 | 30.00th=[10421], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945], 00:37:24.230 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11863], 95.00th=[12125], 00:37:24.230 | 99.00th=[12649], 99.50th=[12911], 99.90th=[48497], 99.95th=[48497], 00:37:24.230 | 99.99th=[48497] 00:37:24.230 bw ( KiB/s): min=32256, max=36608, per=31.91%, avg=35212.80, stdev=945.09, samples=20 00:37:24.230 iops : min= 252, max= 286, avg=275.10, stdev= 7.38, samples=20 00:37:24.230 lat (msec) : 10=10.90%, 20=88.92%, 50=0.18% 00:37:24.230 cpu : usr=95.28%, sys=4.41%, ctx=18, majf=0, minf=92 00:37:24.230 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.230 issued rwts: total=2753,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.230 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:24.230 filename0: (groupid=0, jobs=1): err= 0: pid=1965298: Wed Nov 20 08:34:36 2024 00:37:24.230 read: IOPS=282, BW=35.3MiB/s (37.0MB/s)(355MiB/10043msec) 00:37:24.230 slat (nsec): min=6228, max=46026, avg=15192.00, stdev=6978.51 00:37:24.230 clat (usec): min=3341, max=50845, avg=10584.30, stdev=1330.03 00:37:24.230 lat (usec): min=3348, max=50851, avg=10599.50, stdev=1329.70 00:37:24.230 clat percentiles (usec): 00:37:24.230 | 1.00th=[ 8160], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[10028], 00:37:24.230 | 30.00th=[10290], 40.00th=[10421], 50.00th=[10552], 60.00th=[10814], 00:37:24.230 | 70.00th=[10945], 80.00th=[11207], 90.00th=[11469], 95.00th=[11863], 00:37:24.230 | 99.00th=[12518], 99.50th=[12780], 99.90th=[13829], 99.95th=[43254], 00:37:24.230 | 99.99th=[50594] 00:37:24.230 bw ( KiB/s): min=34816, max=39424, per=32.90%, avg=36300.80, stdev=1022.65, samples=20 00:37:24.230 iops : min= 272, max= 308, avg=283.60, stdev= 7.99, 
samples=20 00:37:24.230 lat (msec) : 4=0.28%, 10=18.89%, 20=80.76%, 50=0.04%, 100=0.04% 00:37:24.230 cpu : usr=95.01%, sys=4.68%, ctx=26, majf=0, minf=46 00:37:24.230 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:37:24.230 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.230 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:37:24.230 issued rwts: total=2838,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:37:24.230 latency : target=0, window=0, percentile=100.00%, depth=3 00:37:24.230 00:37:24.230 Run status group 0 (all jobs): 00:37:24.230 READ: bw=108MiB/s (113MB/s), 34.3MiB/s-38.2MiB/s (35.9MB/s-40.0MB/s), io=1082MiB (1135MB), run=10043-10045msec 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:24.230 
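The "Run status group 0" line reports an aggregate of 108MiB/s across the three filename0 jobs (38.2, 34.3 and 35.3 MiB/s). A quick arithmetic check of that aggregate:

```shell
# Sum the per-job bandwidths reported by fio for group 0.
awk 'BEGIN { printf "%.1f MiB/s\n", 38.2 + 34.3 + 35.3 }'
```

This prints 107.8 MiB/s, which fio rounds to the 108MiB/s shown in the run status.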
00:37:24.230 real 0m11.306s 00:37:24.230 user 0m35.990s 00:37:24.230 sys 0m1.657s 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:24.230 08:34:36 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:37:24.230 ************************************ 00:37:24.230 END TEST fio_dif_digest 00:37:24.230 ************************************ 00:37:24.230 08:34:36 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:37:24.230 08:34:36 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:37:24.230 08:34:36 nvmf_dif -- nvmf/common.sh@335 -- # nvmfcleanup 00:37:24.230 08:34:36 nvmf_dif -- nvmf/common.sh@99 -- # sync 00:37:24.230 08:34:36 nvmf_dif -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:37:24.230 08:34:36 nvmf_dif -- nvmf/common.sh@102 -- # set +e 00:37:24.230 08:34:36 nvmf_dif -- nvmf/common.sh@103 -- # for i in {1..20} 00:37:24.230 08:34:36 nvmf_dif -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:37:24.231 rmmod nvme_tcp 00:37:24.231 rmmod nvme_fabrics 00:37:24.231 rmmod nvme_keyring 00:37:24.231 08:34:37 nvmf_dif -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:37:24.231 08:34:37 nvmf_dif -- nvmf/common.sh@106 -- # set -e 00:37:24.231 08:34:37 nvmf_dif -- nvmf/common.sh@107 -- # return 0 00:37:24.231 08:34:37 nvmf_dif -- nvmf/common.sh@336 -- # '[' -n 1956692 ']' 00:37:24.231 08:34:37 nvmf_dif -- nvmf/common.sh@337 -- # killprocess 1956692 00:37:24.231 08:34:37 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 1956692 ']' 00:37:24.231 08:34:37 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 1956692 00:37:24.231 08:34:37 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:37:24.231 08:34:37 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:24.231 08:34:37 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1956692 00:37:24.231 08:34:37 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:37:24.231 08:34:37 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:24.231 08:34:37 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1956692' 00:37:24.231 killing process with pid 1956692 00:37:24.231 08:34:37 nvmf_dif -- common/autotest_common.sh@973 -- # kill 1956692 00:37:24.231 08:34:37 nvmf_dif -- common/autotest_common.sh@978 -- # wait 1956692 00:37:24.231 08:34:37 nvmf_dif -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:37:24.231 08:34:37 nvmf_dif -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:26.137 Waiting for block devices as requested 00:37:26.137 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:26.137 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:26.137 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:26.396 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:26.396 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:26.396 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:26.655 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:26.655 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:26.655 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:26.915 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:26.915 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:26.915 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:26.915 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:27.175 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:27.175 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:27.175 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:27.434 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:27.434 08:34:41 nvmf_dif -- nvmf/common.sh@342 -- # nvmf_fini 00:37:27.434 08:34:41 nvmf_dif -- nvmf/setup.sh@254 -- # local dev 00:37:27.434 08:34:41 nvmf_dif -- nvmf/setup.sh@257 -- # remove_target_ns 00:37:27.434 08:34:41 nvmf_dif -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:27.434 
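The `killprocess` path above first probes the pid with `kill -0`, then checks the process name with `ps --no-headers -o comm=` before sending the signal. A simplified sketch of that pattern (the `kill_if_running` name is illustrative, not the autotest_common.sh helper verbatim):

```shell
# Kill a pid only if it is still alive and matches the expected command name,
# mirroring the kill -0 / ps -o comm= checks in the log above.
kill_if_running() {
  local pid=$1 expected=$2
  kill -0 "$pid" 2>/dev/null || return 0        # already gone, nothing to do
  local name
  name=$(ps --no-headers -o comm= "$pid")
  [ "$name" = "$expected" ] || return 1          # refuse to kill the wrong process
  kill "$pid"
  wait "$pid" 2>/dev/null
}

sleep 30 & pid=$!
kill_if_running "$pid" sleep || true
kill -0 "$pid" 2>/dev/null && echo "still running" || echo "stopped"
```

The name check guards against pid reuse between the time the pid was recorded and the kill.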
08:34:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:37:27.434 08:34:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@258 -- # delete_main_bridge 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@121 -- # return 0 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@41 -- # _dev=0 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@41 -- # dev_map=() 00:37:29.971 08:34:43 nvmf_dif -- nvmf/setup.sh@274 -- # iptr 
00:37:29.971 08:34:43 nvmf_dif -- nvmf/common.sh@548 -- # iptables-save 00:37:29.971 08:34:43 nvmf_dif -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:37:29.971 08:34:43 nvmf_dif -- nvmf/common.sh@548 -- # iptables-restore 00:37:29.971 00:37:29.971 real 1m14.733s 00:37:29.971 user 7m11.427s 00:37:29.971 sys 0m20.756s 00:37:29.971 08:34:43 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:29.971 08:34:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:37:29.971 ************************************ 00:37:29.971 END TEST nvmf_dif 00:37:29.971 ************************************ 00:37:29.971 08:34:43 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:29.971 08:34:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:29.971 08:34:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:29.971 08:34:43 -- common/autotest_common.sh@10 -- # set +x 00:37:29.971 ************************************ 00:37:29.971 START TEST nvmf_abort_qd_sizes 00:37:29.971 ************************************ 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:37:29.971 * Looking for test storage... 
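The `iptr` step above restores the firewall minus any SPDK-tagged rules by piping `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`. The filtering step can be illustrated on canned rule text without root (the sample rules are invented for the demonstration):

```shell
# Simulated iptables-save output: drop only the SPDK_NVMF-tagged rule,
# as nvmf/common.sh's iptr does before feeding iptables-restore.
rules='-A INPUT -p tcp --dport 22 -j ACCEPT
-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT
-A INPUT -j DROP'

printf '%s\n' "$rules" | grep -v SPDK_NVMF
```

Only the two untagged rules survive, so a later restore removes exactly the rules the test inserted.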
00:37:29.971 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:37:29.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.971 --rc genhtml_branch_coverage=1 00:37:29.971 --rc genhtml_function_coverage=1 00:37:29.971 --rc genhtml_legend=1 00:37:29.971 --rc geninfo_all_blocks=1 00:37:29.971 --rc geninfo_unexecuted_blocks=1 00:37:29.971 00:37:29.971 ' 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:37:29.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.971 --rc genhtml_branch_coverage=1 00:37:29.971 --rc genhtml_function_coverage=1 00:37:29.971 --rc genhtml_legend=1 00:37:29.971 --rc 
geninfo_all_blocks=1 00:37:29.971 --rc geninfo_unexecuted_blocks=1 00:37:29.971 00:37:29.971 ' 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:37:29.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.971 --rc genhtml_branch_coverage=1 00:37:29.971 --rc genhtml_function_coverage=1 00:37:29.971 --rc genhtml_legend=1 00:37:29.971 --rc geninfo_all_blocks=1 00:37:29.971 --rc geninfo_unexecuted_blocks=1 00:37:29.971 00:37:29.971 ' 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:37:29.971 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:29.971 --rc genhtml_branch_coverage=1 00:37:29.971 --rc genhtml_function_coverage=1 00:37:29.971 --rc genhtml_legend=1 00:37:29.971 --rc geninfo_all_blocks=1 00:37:29.971 --rc geninfo_unexecuted_blocks=1 00:37:29.971 00:37:29.971 ' 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:29.971 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # 
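The lcov version gate above (`lt 1.15 2` via `cmp_versions`) splits each version on `.`, `-` and `:` and compares field by field, with missing fields treated as zero. A compact re-implementation of that idea; the `ver_lt` name is illustrative, not the scripts/common.sh original:

```shell
# Return 0 if version $1 is strictly less than $2, comparing numeric fields
# split on . - : (missing fields count as 0), as cmp_versions does above.
ver_lt() {
  local IFS=.-:
  local -a v1=($1) v2=($2)
  local i n=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
  for (( i = 0; i < n; i++ )); do
    local a=${v1[i]:-0} b=${v2[i]:-0}
    (( a < b )) && return 0
    (( a > b )) && return 1
  done
  return 1   # equal versions are not "less than"
}

ver_lt 1.15 2 && echo "1.15 < 2"
```

Numeric field-wise comparison avoids the lexicographic trap where `1.15` would sort before `1.2`.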
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # : 0 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:37:29.972 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # have_pci_nics=0 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # prepare_net_devs 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # local -g is_hw=no 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # remove_target_ns 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # xtrace_disable 
00:37:29.972 08:34:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # pci_devs=() 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # local -a pci_devs 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # pci_net_devs=() 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # pci_drivers=() 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # local -A pci_drivers 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # net_devs=() 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # local -ga net_devs 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # e810=() 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # local -ga e810 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # x722=() 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # local -ga x722 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # mlx=() 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # local -ga mlx 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:35.251 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:37:35.251 Found 0000:86:00.0 (0x8086 - 0x159b) 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:35.252 08:34:49 
nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:37:35.252 Found 0000:86:00.1 (0x8086 - 0x159b) 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:37:35.252 Found net devices under 0000:86:00.0: cvl_0_0 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci 
in "${pci_devs[@]}" 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:37:35.252 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:37:35.253 Found net devices under 0000:86:00.1: cvl_0_1 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # is_hw=yes 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@245 -- # local total_initiator_target_pairs=1 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@247 -- # create_target_ns 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@133 -- # local ns=nvmf_ns_spdk 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@135 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@136 -- # ip netns add nvmf_ns_spdk 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@137 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@139 -- # set_up lo NVMF_TARGET_NS_CMD 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@248 -- # setup_interfaces 1 phy 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@27 -- # local -gA dev_map 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@28 -- # local -g _dev 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@48 -- # ips=("$ip" $((++ip))) 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # [[ tcp == tcp ]] 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@50 -- # _ns=NVMF_TARGET_NS_CMD 
00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@52 -- # [[ phy == phy ]] 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@55 -- # initiator=cvl_0_0 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@55 -- # target=cvl_0_1 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@58 -- # [[ phy == veth ]] 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@59 -- # [[ phy == veth ]] 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ tcp == tcp ]] 00:37:35.253 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # add_to_ns cvl_0_1 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@143 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@63 -- # set_ip cvl_0_0 167772161 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n '' ]] 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772161 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772161 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.1 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.1 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # tee /sys/class/net/cvl_0_0/ifalias 00:37:35.514 10.0.0.1 00:37:35.514 
08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@194 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@195 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # val_to_ip 167772162 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772162 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@197 -- # ip=10.0.0.2 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@198 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # echo 10.0.0.2 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@200 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:37:35.514 10.0.0.2 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@66 -- # set_up cvl_0_0 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 in_ns= 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval ' ip link set cvl_0_0 up' 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip link set cvl_0_0 up 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:37:35.514 08:34:49 
nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@69 -- # [[ phy == veth ]] 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ phy == veth ]] 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # [[ tcp == tcp ]] 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@547 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["initiator$id"]=cvl_0_0 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # dev_map["target$id"]=cvl_0_1 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@38 -- # ping_ips 1 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@87 -- # local pairs=1 pair 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair = 0 )) 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # get_initiator_ip_address initiator0 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- 
# get_ip_address initiator0 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:37:35.514 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:35.514 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:37:35.514 00:37:35.514 --- 10.0.0.1 ping statistics --- 00:37:35.514 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.514 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # get_target_ip_address target0 NVMF_TARGET_NS_CMD 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:35.514 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_1 ]] 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@91 -- # ping_ip 10.0.0.2 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@80 -- # local ip=10.0.0.2 in_ns= 
count=1 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ -n '' ]] 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # eval ' ping -c 1 10.0.0.2' 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@83 -- # ping -c 1 10.0.0.2 00:37:35.773 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:35.773 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:37:35.773 00:37:35.773 --- 10.0.0.2 ping statistics --- 00:37:35.773 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:35.773 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair++ )) 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # (( pair < pairs )) 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/setup.sh@250 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # return 0 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:37:35.773 08:34:49 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:37:38.309 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:38.309 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:37:38.568 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:37:38.568 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:37:39.948 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:37:39.948 08:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:37:39.948 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@321 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:37:39.948 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@322 -- # NVMF_TARGET_INTERFACE2= 00:37:39.948 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # get_initiator_ip_address 00:37:39.948 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:39.948 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@324 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # get_initiator_ip_address 
initiator1 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@174 -- # get_ip_address initiator1 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=initiator1 in_ns= ip 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev initiator1 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=initiator1 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n initiator1 ]] 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # return 1 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev= 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@160 -- # return 0 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@325 -- # NVMF_SECOND_INITIATOR_IP= 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@327 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@333 -- # get_tcp_target_ip_address 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target0 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target0 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target0 ]] 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # 
[[ -n cvl_0_1 ]] 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@101 -- # echo cvl_0_1 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev=cvl_0_1 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:37:39.949 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@163 -- # ip=10.0.0.2 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.2 ]] 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # echo 10.0.0.2 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@333 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@334 -- # get_tcp_target_ip_address target1 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@186 -- # get_target_ip_address target1 NVMF_TARGET_NS_CMD 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@170 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@156 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@157 -- # local -n ns=NVMF_TARGET_NS_CMD 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # get_net_dev target1 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # local dev=target1 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n target1 ]] 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # [[ -n '' ]] 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # return 1 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@159 -- # dev= 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@160 -- # return 0 
00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@334 -- # NVMF_SECOND_TARGET_IP= 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@338 -- # [[ tcp == rdma ]] 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/setup.sh@343 -- # RDMA_IP_LIST='10.0.0.2 00:37:40.208 ' 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:37:40.208 08:34:53 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # nvmfpid=1973160 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # waitforlisten 1973160 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 1973160 ']' 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:40.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:40.209 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:40.209 [2024-11-20 08:34:54.081099] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:37:40.209 [2024-11-20 08:34:54.081141] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:40.209 [2024-11-20 08:34:54.157344] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:37:40.209 [2024-11-20 08:34:54.200343] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:40.209 [2024-11-20 08:34:54.200379] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:40.209 [2024-11-20 08:34:54.200387] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:40.209 [2024-11-20 08:34:54.200392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:40.209 [2024-11-20 08:34:54.200397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:40.209 [2024-11-20 08:34:54.201920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:37:40.209 [2024-11-20 08:34:54.202029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:37:40.209 [2024-11-20 08:34:54.202140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:37:40.209 [2024-11-20 08:34:54.202140] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:40.468 08:34:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:40.468 ************************************ 00:37:40.468 START TEST spdk_target_abort 00:37:40.468 ************************************ 00:37:40.468 08:34:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:37:40.468 08:34:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:37:40.468 08:34:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:37:40.468 08:34:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:40.468 08:34:54 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.762 spdk_targetn1 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.762 [2024-11-20 08:34:57.217088] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:37:43.762 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:43.763 [2024-11-20 08:34:57.260408] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:43.763 08:34:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:47.052 Initializing NVMe Controllers 00:37:47.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:47.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:47.052 Initialization complete. Launching workers. 
00:37:47.052 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15602, failed: 0 00:37:47.052 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1425, failed to submit 14177 00:37:47.052 success 686, unsuccessful 739, failed 0 00:37:47.052 08:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:47.052 08:35:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:50.368 Initializing NVMe Controllers 00:37:50.368 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:50.368 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:50.368 Initialization complete. Launching workers. 00:37:50.368 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8442, failed: 0 00:37:50.368 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1218, failed to submit 7224 00:37:50.368 success 311, unsuccessful 907, failed 0 00:37:50.368 08:35:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:50.368 08:35:03 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:53.673 Initializing NVMe Controllers 00:37:53.673 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:37:53.673 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:37:53.673 Initialization complete. Launching workers. 
00:37:53.673 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38929, failed: 0 00:37:53.673 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2889, failed to submit 36040 00:37:53.673 success 611, unsuccessful 2278, failed 0 00:37:53.673 08:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:37:53.673 08:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.673 08:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:53.673 08:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:53.673 08:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:37:53.673 08:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:53.673 08:35:07 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.048 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:55.048 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1973160 00:37:55.048 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 1973160 ']' 00:37:55.048 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 1973160 00:37:55.048 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:37:55.048 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:55.048 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1973160 00:37:55.312 08:35:09 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:37:55.312 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:37:55.312 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1973160' 00:37:55.312 killing process with pid 1973160 00:37:55.312 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 1973160 00:37:55.312 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 1973160 00:37:55.312 00:37:55.312 real 0m14.882s 00:37:55.312 user 0m56.780s 00:37:55.312 sys 0m2.660s 00:37:55.312 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:55.312 08:35:09 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:37:55.312 ************************************ 00:37:55.312 END TEST spdk_target_abort 00:37:55.312 ************************************ 00:37:55.312 08:35:09 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:37:55.312 08:35:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:37:55.312 08:35:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:55.312 08:35:09 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:37:55.576 ************************************ 00:37:55.576 START TEST kernel_target_abort 00:37:55.576 ************************************ 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@540 -- # get_initiator_ip_address 00:37:55.576 08:35:09 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@174 -- # get_ip_address initiator0 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@156 -- # local dev=initiator0 in_ns= ip 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@157 -- # [[ -n '' ]] 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # get_net_dev initiator0 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@98 -- # local dev=initiator0 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n initiator0 ]] 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@100 -- # [[ -n cvl_0_0 ]] 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@101 -- # echo cvl_0_0 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@159 -- # dev=cvl_0_0 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # cat /sys/class/net/cvl_0_0/ifalias 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@163 -- # ip=10.0.0.1 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@164 -- # [[ -n 10.0.0.1 ]] 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/setup.sh@166 -- # echo 10.0.0.1 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:37:55.576 08:35:09 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@441 -- # local block nvme 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@444 -- # modprobe nvmet 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:37:55.576 08:35:09 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:37:58.110 Waiting for block devices as requested 00:37:58.110 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:37:58.370 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:58.370 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:58.370 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:58.628 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:58.628 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:58.628 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:37:58.887 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:58.887 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:58.887 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:37:58.887 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:37:59.145 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:37:59.145 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:37:59.145 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:37:59.404 0000:80:04.2 (8086 2021): 
vfio-pci -> ioatdma 00:37:59.404 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:37:59.404 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:37:59.663 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:37:59.663 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:37:59.663 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:37:59.663 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:37:59.663 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:37:59.663 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:37:59.663 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:37:59.663 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:37:59.663 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:37:59.663 No valid GPT data, bailing 00:37:59.663 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:37:59.663 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@460 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@469 -- # echo 1 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@471 -- # echo 1 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@474 -- # echo tcp 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@475 -- # echo 4420 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@476 -- # echo ipv4 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:37:59.664 00:37:59.664 Discovery Log Number of Records 2, Generation counter 2 00:37:59.664 =====Discovery Log Entry 0====== 00:37:59.664 trtype: tcp 00:37:59.664 adrfam: ipv4 00:37:59.664 subtype: current discovery subsystem 00:37:59.664 treq: not specified, sq flow control disable supported 00:37:59.664 portid: 1 00:37:59.664 trsvcid: 4420 
00:37:59.664 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:37:59.664 traddr: 10.0.0.1 00:37:59.664 eflags: none 00:37:59.664 sectype: none 00:37:59.664 =====Discovery Log Entry 1====== 00:37:59.664 trtype: tcp 00:37:59.664 adrfam: ipv4 00:37:59.664 subtype: nvme subsystem 00:37:59.664 treq: not specified, sq flow control disable supported 00:37:59.664 portid: 1 00:37:59.664 trsvcid: 4420 00:37:59.664 subnqn: nqn.2016-06.io.spdk:testnqn 00:37:59.664 traddr: 10.0.0.1 00:37:59.664 eflags: none 00:37:59.664 sectype: none 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype 
adrfam traddr trsvcid subnqn 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:37:59.664 08:35:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:02.953 Initializing NVMe Controllers 00:38:02.954 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:02.954 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:02.954 Initialization complete. Launching workers. 
00:38:02.954 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95277, failed: 0 00:38:02.954 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95277, failed to submit 0 00:38:02.954 success 0, unsuccessful 95277, failed 0 00:38:02.954 08:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:02.954 08:35:16 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:06.243 Initializing NVMe Controllers 00:38:06.243 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:06.243 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:06.243 Initialization complete. Launching workers. 00:38:06.243 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 151658, failed: 0 00:38:06.243 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38326, failed to submit 113332 00:38:06.243 success 0, unsuccessful 38326, failed 0 00:38:06.243 08:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:38:06.243 08:35:19 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:38:09.534 Initializing NVMe Controllers 00:38:09.534 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:38:09.534 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:38:09.534 Initialization complete. Launching workers. 
00:38:09.534 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 140248, failed: 0 00:38:09.534 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35122, failed to submit 105126 00:38:09.534 success 0, unsuccessful 35122, failed 0 00:38:09.534 08:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:38:09.534 08:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:38:09.534 08:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@488 -- # echo 0 00:38:09.534 08:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:09.534 08:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:38:09.534 08:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:38:09.534 08:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:38:09.534 08:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:38:09.534 08:35:22 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:38:09.534 08:35:23 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:38:12.071 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:38:12.071 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:38:13.452 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:38:13.452 00:38:13.452 real 0m18.078s 00:38:13.452 user 0m9.205s 00:38:13.452 sys 0m5.026s 00:38:13.452 08:35:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:13.452 08:35:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:38:13.452 ************************************ 00:38:13.452 END TEST kernel_target_abort 00:38:13.452 ************************************ 00:38:13.452 08:35:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:38:13.452 08:35:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:38:13.453 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # nvmfcleanup 00:38:13.453 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@99 -- # sync 00:38:13.453 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:38:13.453 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@102 -- # set +e 00:38:13.453 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@103 -- # for i in {1..20} 00:38:13.453 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:38:13.453 rmmod nvme_tcp 00:38:13.747 rmmod nvme_fabrics 00:38:13.747 rmmod nvme_keyring 00:38:13.747 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@105 
-- # modprobe -v -r nvme-fabrics 00:38:13.747 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@106 -- # set -e 00:38:13.747 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@107 -- # return 0 00:38:13.747 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # '[' -n 1973160 ']' 00:38:13.747 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # killprocess 1973160 00:38:13.747 08:35:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 1973160 ']' 00:38:13.747 08:35:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 1973160 00:38:13.747 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (1973160) - No such process 00:38:13.747 08:35:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 1973160 is not found' 00:38:13.747 Process with pid 1973160 is not found 00:38:13.747 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:38:13.748 08:35:27 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:38:16.283 Waiting for block devices as requested 00:38:16.283 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:38:16.543 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:16.543 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:16.543 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:16.802 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:16.802 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:16.802 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:17.062 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:17.062 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:17.062 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:38:17.062 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:38:17.322 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:38:17.322 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:38:17.322 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:38:17.581 
0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:38:17.581 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:38:17.581 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:38:17.841 08:35:31 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # nvmf_fini 00:38:17.841 08:35:31 nvmf_abort_qd_sizes -- nvmf/setup.sh@254 -- # local dev 00:38:17.841 08:35:31 nvmf_abort_qd_sizes -- nvmf/setup.sh@257 -- # remove_target_ns 00:38:17.841 08:35:31 nvmf_abort_qd_sizes -- nvmf/setup.sh@313 -- # xtrace_disable_per_cmd _remove_target_ns 00:38:17.841 08:35:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:38:17.841 08:35:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@258 -- # delete_main_bridge 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@121 -- # return 0 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@269 -- # flush_ip cvl_0_0 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@211 -- # local dev=cvl_0_0 in_ns= 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_0' 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_0 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # for dev in "${dev_map[@]}" 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@261 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@265 -- # (( 4 == 3 )) 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@269 -- # flush_ip cvl_0_1 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@211 -- # local dev=cvl_0_1 in_ns= 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@212 -- # [[ -n '' ]] 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # eval ' ip addr flush dev cvl_0_1' 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # ip addr flush dev cvl_0_1 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@273 -- # reset_setup_interfaces 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # _dev=0 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # dev_map=() 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/setup.sh@274 -- # iptr 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-save 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # grep -v SPDK_NVMF 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- nvmf/common.sh@548 -- # iptables-restore 00:38:19.746 00:38:19.746 real 0m50.247s 00:38:19.746 user 1m10.403s 00:38:19.746 sys 0m16.460s 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:19.746 08:35:33 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:38:19.746 ************************************ 00:38:19.746 END TEST nvmf_abort_qd_sizes 00:38:19.746 ************************************ 00:38:19.747 08:35:33 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:19.747 08:35:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:19.747 08:35:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:19.747 08:35:33 -- common/autotest_common.sh@10 -- # set +x 00:38:20.006 ************************************ 00:38:20.006 START TEST keyring_file 00:38:20.006 ************************************ 00:38:20.006 
08:35:33 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:38:20.006 * Looking for test storage... 00:38:20.006 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:20.006 08:35:33 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:20.006 08:35:33 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:38:20.006 08:35:33 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:20.006 08:35:33 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:38:20.006 08:35:33 keyring_file -- scripts/common.sh@345 -- # : 1 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@353 -- # local d=1 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@355 -- # echo 1 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@353 -- # local d=2 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@355 -- # echo 2 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@368 -- # return 0 00:38:20.007 08:35:33 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:20.007 08:35:33 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:20.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.007 --rc genhtml_branch_coverage=1 00:38:20.007 --rc genhtml_function_coverage=1 00:38:20.007 --rc genhtml_legend=1 00:38:20.007 --rc geninfo_all_blocks=1 00:38:20.007 --rc geninfo_unexecuted_blocks=1 00:38:20.007 00:38:20.007 ' 00:38:20.007 08:35:33 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:20.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.007 --rc genhtml_branch_coverage=1 00:38:20.007 --rc genhtml_function_coverage=1 00:38:20.007 --rc genhtml_legend=1 00:38:20.007 --rc geninfo_all_blocks=1 00:38:20.007 --rc geninfo_unexecuted_blocks=1 00:38:20.007 00:38:20.007 ' 00:38:20.007 
08:35:33 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:20.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.007 --rc genhtml_branch_coverage=1 00:38:20.007 --rc genhtml_function_coverage=1 00:38:20.007 --rc genhtml_legend=1 00:38:20.007 --rc geninfo_all_blocks=1 00:38:20.007 --rc geninfo_unexecuted_blocks=1 00:38:20.007 00:38:20.007 ' 00:38:20.007 08:35:33 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:20.007 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:20.007 --rc genhtml_branch_coverage=1 00:38:20.007 --rc genhtml_function_coverage=1 00:38:20.007 --rc genhtml_legend=1 00:38:20.007 --rc geninfo_all_blocks=1 00:38:20.007 --rc geninfo_unexecuted_blocks=1 00:38:20.007 00:38:20.007 ' 00:38:20.007 08:35:33 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:20.007 08:35:33 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@16 
-- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:20.007 08:35:33 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:20.007 08:35:33 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.007 08:35:33 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.007 08:35:33 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.007 08:35:33 keyring_file -- paths/export.sh@5 -- # export PATH 00:38:20.007 08:35:33 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:38:20.007 08:35:33 keyring_file -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:20.007 08:35:33 keyring_file -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:20.007 08:35:33 keyring_file -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@50 -- # : 0 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:38:20.007 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:20.007 08:35:33 keyring_file -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:20.007 08:35:34 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:20.007 08:35:34 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:20.007 08:35:34 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:20.007 08:35:34 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:38:20.007 08:35:34 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:38:20.007 08:35:34 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:38:20.007 08:35:34 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:20.007 08:35:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:20.007 08:35:34 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:20.007 08:35:34 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:20.007 08:35:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:20.007 08:35:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:20.007 08:35:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.rZL27WvNn2 00:38:20.007 08:35:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:20.007 08:35:34 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:20.008 08:35:34 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:38:20.008 08:35:34 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:38:20.008 08:35:34 keyring_file -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:38:20.008 08:35:34 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:38:20.008 08:35:34 keyring_file -- nvmf/common.sh@507 -- # python - 00:38:20.266 08:35:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.rZL27WvNn2 00:38:20.266 08:35:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.rZL27WvNn2 00:38:20.266 08:35:34 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.rZL27WvNn2 00:38:20.266 08:35:34 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:38:20.266 08:35:34 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:20.266 08:35:34 keyring_file -- keyring/common.sh@17 -- # name=key1 00:38:20.266 08:35:34 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:20.266 08:35:34 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:20.266 08:35:34 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:20.266 08:35:34 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SWSx3J7NUW 00:38:20.266 08:35:34 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:20.266 08:35:34 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:20.266 08:35:34 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:38:20.266 08:35:34 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:38:20.266 08:35:34 keyring_file -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:38:20.266 08:35:34 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:38:20.266 08:35:34 keyring_file -- nvmf/common.sh@507 -- # python - 00:38:20.266 08:35:34 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SWSx3J7NUW 00:38:20.266 08:35:34 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SWSx3J7NUW 00:38:20.266 08:35:34 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.SWSx3J7NUW 
00:38:20.266 08:35:34 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:20.266 08:35:34 keyring_file -- keyring/file.sh@30 -- # tgtpid=1981923 00:38:20.266 08:35:34 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1981923 00:38:20.266 08:35:34 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1981923 ']' 00:38:20.266 08:35:34 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:20.266 08:35:34 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:20.266 08:35:34 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:20.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:20.266 08:35:34 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:20.266 08:35:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:20.266 [2024-11-20 08:35:34.146043] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:38:20.266 [2024-11-20 08:35:34.146087] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1981923 ] 00:38:20.266 [2024-11-20 08:35:34.217403] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.267 [2024-11-20 08:35:34.259398] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:20.525 08:35:34 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:20.525 [2024-11-20 08:35:34.470196] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:20.525 null0 00:38:20.525 [2024-11-20 08:35:34.502244] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:20.525 [2024-11-20 08:35:34.502623] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:20.525 08:35:34 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:20.525 [2024-11-20 08:35:34.530309] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:38:20.525 request: 00:38:20.525 { 00:38:20.525 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:38:20.525 "secure_channel": false, 00:38:20.525 "listen_address": { 00:38:20.525 "trtype": "tcp", 00:38:20.525 "traddr": "127.0.0.1", 00:38:20.525 "trsvcid": "4420" 00:38:20.525 }, 00:38:20.525 "method": "nvmf_subsystem_add_listener", 00:38:20.525 "req_id": 1 00:38:20.525 } 00:38:20.525 Got JSON-RPC error response 00:38:20.525 response: 00:38:20.525 { 00:38:20.525 "code": -32602, 00:38:20.525 "message": "Invalid parameters" 00:38:20.525 } 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:20.525 08:35:34 keyring_file -- keyring/file.sh@47 -- # bperfpid=1981929 00:38:20.525 08:35:34 keyring_file -- keyring/file.sh@49 -- # waitforlisten 1981929 /var/tmp/bperf.sock 00:38:20.525 08:35:34 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:38:20.525 08:35:34 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1981929 ']' 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:20.525 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:20.525 08:35:34 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:20.783 [2024-11-20 08:35:34.586112] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 00:38:20.783 [2024-11-20 08:35:34.586154] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1981929 ] 00:38:20.783 [2024-11-20 08:35:34.660601] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:20.783 [2024-11-20 08:35:34.702961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:20.783 08:35:34 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:20.783 08:35:34 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:20.783 08:35:34 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rZL27WvNn2 00:38:20.783 08:35:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rZL27WvNn2 00:38:21.041 08:35:34 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SWSx3J7NUW 00:38:21.041 08:35:34 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SWSx3J7NUW 00:38:21.299 08:35:35 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:38:21.299 08:35:35 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:38:21.299 08:35:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:21.299 08:35:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.299 08:35:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.557 08:35:35 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.rZL27WvNn2 == \/\t\m\p\/\t\m\p\.\r\Z\L\2\7\W\v\N\n\2 ]] 00:38:21.557 08:35:35 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:38:21.557 08:35:35 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:38:21.557 08:35:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.557 08:35:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:21.557 08:35:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.557 08:35:35 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.SWSx3J7NUW == \/\t\m\p\/\t\m\p\.\S\W\S\x\3\J\7\N\U\W ]] 00:38:21.557 08:35:35 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:38:21.557 08:35:35 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:21.557 08:35:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.557 08:35:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.557 08:35:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:21.557 08:35:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
00:38:21.815 08:35:35 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:38:21.815 08:35:35 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:38:21.815 08:35:35 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:21.815 08:35:35 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:21.815 08:35:35 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:21.815 08:35:35 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:21.815 08:35:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.074 08:35:35 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:38:22.074 08:35:35 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:22.074 08:35:35 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:22.333 [2024-11-20 08:35:36.152394] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:22.333 nvme0n1 00:38:22.333 08:35:36 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:38:22.333 08:35:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:22.333 08:35:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.333 08:35:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.333 08:35:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:22.333 08:35:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
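The repeated `get_refcnt` checks above pipe `keyring_get_keys` output through `jq '.[] | select(.name == "key0")'` and `jq -r .refcnt`. A small Python equivalent of that filter, using a hypothetical sample payload shaped like the RPC's JSON array:

```python
import json

# Hypothetical sample matching the shape keyring_get_keys returns in this log.
keys_json = '[{"name": "key0", "path": "/tmp/tmp.XXXXXXXXXX", "refcnt": 1}]'


def get_refcnt(keys: str, name: str) -> int:
    # Equivalent of: jq '.[] | select(.name == "key0")' followed by jq -r .refcnt
    return next(k["refcnt"] for k in json.loads(keys) if k["name"] == name)
```

The test's `(( 1 == 1 ))` / `(( 2 == 2 ))` assertions compare this extracted refcount against the expected value (2 for `key0` once the bdev controller holds an extra reference).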
keyring_get_keys 00:38:22.591 08:35:36 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:38:22.591 08:35:36 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:38:22.591 08:35:36 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:22.591 08:35:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:22.591 08:35:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:22.591 08:35:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:22.591 08:35:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:22.849 08:35:36 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:38:22.849 08:35:36 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:22.849 Running I/O for 1 seconds... 00:38:23.785 19309.00 IOPS, 75.43 MiB/s 00:38:23.785 Latency(us) 00:38:23.785 [2024-11-20T07:35:37.813Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:23.785 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:38:23.785 nvme0n1 : 1.00 19357.07 75.61 0.00 0.00 6600.78 2605.84 16352.79 00:38:23.785 [2024-11-20T07:35:37.813Z] =================================================================================================================== 00:38:23.785 [2024-11-20T07:35:37.813Z] Total : 19357.07 75.61 0.00 0.00 6600.78 2605.84 16352.79 00:38:23.785 { 00:38:23.785 "results": [ 00:38:23.785 { 00:38:23.785 "job": "nvme0n1", 00:38:23.785 "core_mask": "0x2", 00:38:23.785 "workload": "randrw", 00:38:23.785 "percentage": 50, 00:38:23.785 "status": "finished", 00:38:23.785 "queue_depth": 128, 00:38:23.785 "io_size": 4096, 00:38:23.785 "runtime": 1.004181, 00:38:23.785 "iops": 19357.068098281088, 00:38:23.785 "mibps": 75.6135472589105, 
00:38:23.785 "io_failed": 0, 00:38:23.785 "io_timeout": 0, 00:38:23.785 "avg_latency_us": 6600.77708014248, 00:38:23.785 "min_latency_us": 2605.8361904761905, 00:38:23.785 "max_latency_us": 16352.792380952382 00:38:23.785 } 00:38:23.785 ], 00:38:23.785 "core_count": 1 00:38:23.785 } 00:38:23.785 08:35:37 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:23.785 08:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:24.044 08:35:37 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:38:24.044 08:35:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.044 08:35:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.044 08:35:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.044 08:35:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:24.044 08:35:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.304 08:35:38 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:38:24.304 08:35:38 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:38:24.304 08:35:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:24.304 08:35:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.304 08:35:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.304 08:35:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:24.304 08:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.563 08:35:38 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:38:24.563 08:35:38 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:24.563 08:35:38 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:24.563 08:35:38 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:24.563 08:35:38 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:24.563 08:35:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:24.563 08:35:38 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:24.563 08:35:38 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:24.563 08:35:38 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:24.563 08:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:38:24.563 [2024-11-20 08:35:38.514209] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:24.563 [2024-11-20 08:35:38.514784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1820d00 (107): Transport endpoint is not connected 00:38:24.563 [2024-11-20 08:35:38.515781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1820d00 (9): Bad file descriptor 00:38:24.563 [2024-11-20 08:35:38.516782] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:24.563 [2024-11-20 08:35:38.516792] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:24.563 [2024-11-20 08:35:38.516799] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:24.563 [2024-11-20 08:35:38.516809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:38:24.563 request: 00:38:24.563 { 00:38:24.563 "name": "nvme0", 00:38:24.563 "trtype": "tcp", 00:38:24.563 "traddr": "127.0.0.1", 00:38:24.563 "adrfam": "ipv4", 00:38:24.563 "trsvcid": "4420", 00:38:24.563 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:24.563 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:24.563 "prchk_reftag": false, 00:38:24.563 "prchk_guard": false, 00:38:24.563 "hdgst": false, 00:38:24.563 "ddgst": false, 00:38:24.563 "psk": "key1", 00:38:24.563 "allow_unrecognized_csi": false, 00:38:24.563 "method": "bdev_nvme_attach_controller", 00:38:24.563 "req_id": 1 00:38:24.563 } 00:38:24.563 Got JSON-RPC error response 00:38:24.563 response: 00:38:24.563 { 00:38:24.563 "code": -5, 00:38:24.563 "message": "Input/output error" 00:38:24.563 } 00:38:24.563 08:35:38 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:24.563 08:35:38 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:24.563 08:35:38 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:24.563 08:35:38 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:24.564 08:35:38 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:38:24.564 08:35:38 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:24.564 08:35:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.564 08:35:38 keyring_file -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:24.564 08:35:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.564 08:35:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:24.822 08:35:38 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:38:24.822 08:35:38 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:38:24.822 08:35:38 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:24.822 08:35:38 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:24.822 08:35:38 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:24.822 08:35:38 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:24.822 08:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.081 08:35:38 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:38:25.081 08:35:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:38:25.081 08:35:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:25.338 08:35:39 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:38:25.338 08:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:38:25.338 08:35:39 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:38:25.338 08:35:39 keyring_file -- keyring/file.sh@78 -- # jq length 00:38:25.338 08:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:25.596 08:35:39 keyring_file -- keyring/file.sh@78 -- 
# (( 0 == 0 )) 00:38:25.596 08:35:39 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.rZL27WvNn2 00:38:25.596 08:35:39 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.rZL27WvNn2 00:38:25.596 08:35:39 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:25.596 08:35:39 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.rZL27WvNn2 00:38:25.596 08:35:39 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:25.596 08:35:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:25.596 08:35:39 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:25.596 08:35:39 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:25.596 08:35:39 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rZL27WvNn2 00:38:25.596 08:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rZL27WvNn2 00:38:25.855 [2024-11-20 08:35:39.685883] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.rZL27WvNn2': 0100660 00:38:25.855 [2024-11-20 08:35:39.685908] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:38:25.855 request: 00:38:25.855 { 00:38:25.855 "name": "key0", 00:38:25.855 "path": "/tmp/tmp.rZL27WvNn2", 00:38:25.855 "method": "keyring_file_add_key", 00:38:25.855 "req_id": 1 00:38:25.855 } 00:38:25.855 Got JSON-RPC error response 00:38:25.855 response: 00:38:25.855 { 00:38:25.855 "code": -1, 00:38:25.855 "message": "Operation not permitted" 00:38:25.855 } 00:38:25.855 08:35:39 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:25.855 08:35:39 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:25.855 
08:35:39 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:25.855 08:35:39 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:25.855 08:35:39 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.rZL27WvNn2 00:38:25.855 08:35:39 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.rZL27WvNn2 00:38:25.855 08:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.rZL27WvNn2 00:38:26.114 08:35:39 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.rZL27WvNn2 00:38:26.114 08:35:39 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:38:26.114 08:35:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:26.114 08:35:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:26.114 08:35:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:26.114 08:35:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:26.114 08:35:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:26.114 08:35:40 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:38:26.114 08:35:40 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.114 08:35:40 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:38:26.114 08:35:40 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.114 08:35:40 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:26.114 08:35:40 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:26.114 08:35:40 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:26.114 08:35:40 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:26.114 08:35:40 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.114 08:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.373 [2024-11-20 08:35:40.275447] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.rZL27WvNn2': No such file or directory 00:38:26.373 [2024-11-20 08:35:40.275468] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:38:26.373 [2024-11-20 08:35:40.275484] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:38:26.373 [2024-11-20 08:35:40.275492] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:38:26.373 [2024-11-20 08:35:40.275499] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:38:26.373 [2024-11-20 08:35:40.275506] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:38:26.373 request: 00:38:26.373 { 00:38:26.373 "name": "nvme0", 00:38:26.373 "trtype": "tcp", 00:38:26.373 "traddr": "127.0.0.1", 00:38:26.373 "adrfam": "ipv4", 00:38:26.373 "trsvcid": "4420", 00:38:26.373 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:26.373 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:38:26.373 "prchk_reftag": false, 00:38:26.373 "prchk_guard": false, 00:38:26.373 "hdgst": false, 00:38:26.373 "ddgst": false, 00:38:26.373 "psk": "key0", 00:38:26.373 "allow_unrecognized_csi": false, 00:38:26.373 "method": "bdev_nvme_attach_controller", 00:38:26.373 "req_id": 1 00:38:26.373 } 00:38:26.373 Got JSON-RPC error response 00:38:26.373 response: 00:38:26.373 { 00:38:26.373 "code": -19, 00:38:26.373 "message": "No such device" 00:38:26.373 } 00:38:26.373 08:35:40 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:38:26.373 08:35:40 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:26.373 08:35:40 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:26.373 08:35:40 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:26.373 08:35:40 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:38:26.373 08:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:26.633 08:35:40 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:38:26.633 08:35:40 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:38:26.633 08:35:40 keyring_file -- keyring/common.sh@17 -- # name=key0 00:38:26.633 08:35:40 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:26.633 08:35:40 keyring_file -- keyring/common.sh@17 -- # digest=0 00:38:26.633 08:35:40 keyring_file -- keyring/common.sh@18 -- # mktemp 00:38:26.633 08:35:40 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.po5Wo97BP0 00:38:26.633 08:35:40 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:26.633 08:35:40 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:26.633 08:35:40 keyring_file -- 
nvmf/common.sh@504 -- # local prefix key digest 00:38:26.633 08:35:40 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:38:26.633 08:35:40 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:38:26.633 08:35:40 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:38:26.633 08:35:40 keyring_file -- nvmf/common.sh@507 -- # python - 00:38:26.633 08:35:40 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.po5Wo97BP0 00:38:26.633 08:35:40 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.po5Wo97BP0 00:38:26.633 08:35:40 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.po5Wo97BP0 00:38:26.633 08:35:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.po5Wo97BP0 00:38:26.633 08:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.po5Wo97BP0 00:38:26.892 08:35:40 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:26.892 08:35:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:27.151 nvme0n1 00:38:27.151 08:35:41 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:38:27.151 08:35:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:27.151 08:35:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:27.151 08:35:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.151 08:35:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:27.151 08:35:41 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.410 08:35:41 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:38:27.410 08:35:41 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:38:27.410 08:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:38:27.410 08:35:41 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:38:27.410 08:35:41 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:38:27.410 08:35:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.410 08:35:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:27.410 08:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.669 08:35:41 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:38:27.669 08:35:41 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:38:27.669 08:35:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:27.669 08:35:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:27.669 08:35:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:27.669 08:35:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:27.669 08:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:27.928 08:35:41 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:38:27.928 08:35:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:27.928 08:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:38:28.187 08:35:41 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:38:28.187 08:35:41 keyring_file -- keyring/file.sh@105 -- # jq length 00:38:28.187 08:35:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:28.187 08:35:42 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:38:28.187 08:35:42 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.po5Wo97BP0 00:38:28.187 08:35:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.po5Wo97BP0 00:38:28.445 08:35:42 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SWSx3J7NUW 00:38:28.446 08:35:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SWSx3J7NUW 00:38:28.705 08:35:42 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:28.705 08:35:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:38:28.964 nvme0n1 00:38:28.964 08:35:42 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:38:28.964 08:35:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:38:29.223 08:35:43 keyring_file -- keyring/file.sh@113 -- # config='{ 00:38:29.223 "subsystems": [ 00:38:29.223 { 00:38:29.223 "subsystem": 
"keyring", 00:38:29.223 "config": [ 00:38:29.223 { 00:38:29.223 "method": "keyring_file_add_key", 00:38:29.223 "params": { 00:38:29.223 "name": "key0", 00:38:29.223 "path": "/tmp/tmp.po5Wo97BP0" 00:38:29.223 } 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "method": "keyring_file_add_key", 00:38:29.223 "params": { 00:38:29.223 "name": "key1", 00:38:29.223 "path": "/tmp/tmp.SWSx3J7NUW" 00:38:29.223 } 00:38:29.223 } 00:38:29.223 ] 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "subsystem": "iobuf", 00:38:29.223 "config": [ 00:38:29.223 { 00:38:29.223 "method": "iobuf_set_options", 00:38:29.223 "params": { 00:38:29.223 "small_pool_count": 8192, 00:38:29.223 "large_pool_count": 1024, 00:38:29.223 "small_bufsize": 8192, 00:38:29.223 "large_bufsize": 135168, 00:38:29.223 "enable_numa": false 00:38:29.223 } 00:38:29.223 } 00:38:29.223 ] 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "subsystem": "sock", 00:38:29.223 "config": [ 00:38:29.223 { 00:38:29.223 "method": "sock_set_default_impl", 00:38:29.223 "params": { 00:38:29.223 "impl_name": "posix" 00:38:29.223 } 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "method": "sock_impl_set_options", 00:38:29.223 "params": { 00:38:29.223 "impl_name": "ssl", 00:38:29.223 "recv_buf_size": 4096, 00:38:29.223 "send_buf_size": 4096, 00:38:29.223 "enable_recv_pipe": true, 00:38:29.223 "enable_quickack": false, 00:38:29.223 "enable_placement_id": 0, 00:38:29.223 "enable_zerocopy_send_server": true, 00:38:29.223 "enable_zerocopy_send_client": false, 00:38:29.223 "zerocopy_threshold": 0, 00:38:29.223 "tls_version": 0, 00:38:29.223 "enable_ktls": false 00:38:29.223 } 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "method": "sock_impl_set_options", 00:38:29.223 "params": { 00:38:29.223 "impl_name": "posix", 00:38:29.223 "recv_buf_size": 2097152, 00:38:29.223 "send_buf_size": 2097152, 00:38:29.223 "enable_recv_pipe": true, 00:38:29.223 "enable_quickack": false, 00:38:29.223 "enable_placement_id": 0, 00:38:29.223 "enable_zerocopy_send_server": true, 
00:38:29.223 "enable_zerocopy_send_client": false, 00:38:29.223 "zerocopy_threshold": 0, 00:38:29.223 "tls_version": 0, 00:38:29.223 "enable_ktls": false 00:38:29.223 } 00:38:29.223 } 00:38:29.223 ] 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "subsystem": "vmd", 00:38:29.223 "config": [] 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "subsystem": "accel", 00:38:29.223 "config": [ 00:38:29.223 { 00:38:29.223 "method": "accel_set_options", 00:38:29.223 "params": { 00:38:29.223 "small_cache_size": 128, 00:38:29.223 "large_cache_size": 16, 00:38:29.223 "task_count": 2048, 00:38:29.223 "sequence_count": 2048, 00:38:29.223 "buf_count": 2048 00:38:29.223 } 00:38:29.223 } 00:38:29.223 ] 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "subsystem": "bdev", 00:38:29.223 "config": [ 00:38:29.223 { 00:38:29.223 "method": "bdev_set_options", 00:38:29.223 "params": { 00:38:29.223 "bdev_io_pool_size": 65535, 00:38:29.223 "bdev_io_cache_size": 256, 00:38:29.223 "bdev_auto_examine": true, 00:38:29.223 "iobuf_small_cache_size": 128, 00:38:29.223 "iobuf_large_cache_size": 16 00:38:29.223 } 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "method": "bdev_raid_set_options", 00:38:29.223 "params": { 00:38:29.223 "process_window_size_kb": 1024, 00:38:29.223 "process_max_bandwidth_mb_sec": 0 00:38:29.223 } 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "method": "bdev_iscsi_set_options", 00:38:29.223 "params": { 00:38:29.223 "timeout_sec": 30 00:38:29.223 } 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "method": "bdev_nvme_set_options", 00:38:29.223 "params": { 00:38:29.223 "action_on_timeout": "none", 00:38:29.223 "timeout_us": 0, 00:38:29.223 "timeout_admin_us": 0, 00:38:29.223 "keep_alive_timeout_ms": 10000, 00:38:29.223 "arbitration_burst": 0, 00:38:29.223 "low_priority_weight": 0, 00:38:29.223 "medium_priority_weight": 0, 00:38:29.223 "high_priority_weight": 0, 00:38:29.223 "nvme_adminq_poll_period_us": 10000, 00:38:29.223 "nvme_ioq_poll_period_us": 0, 00:38:29.223 "io_queue_requests": 512, 
00:38:29.223 "delay_cmd_submit": true, 00:38:29.223 "transport_retry_count": 4, 00:38:29.223 "bdev_retry_count": 3, 00:38:29.223 "transport_ack_timeout": 0, 00:38:29.223 "ctrlr_loss_timeout_sec": 0, 00:38:29.223 "reconnect_delay_sec": 0, 00:38:29.223 "fast_io_fail_timeout_sec": 0, 00:38:29.223 "disable_auto_failback": false, 00:38:29.223 "generate_uuids": false, 00:38:29.223 "transport_tos": 0, 00:38:29.223 "nvme_error_stat": false, 00:38:29.223 "rdma_srq_size": 0, 00:38:29.223 "io_path_stat": false, 00:38:29.223 "allow_accel_sequence": false, 00:38:29.223 "rdma_max_cq_size": 0, 00:38:29.223 "rdma_cm_event_timeout_ms": 0, 00:38:29.223 "dhchap_digests": [ 00:38:29.223 "sha256", 00:38:29.223 "sha384", 00:38:29.223 "sha512" 00:38:29.223 ], 00:38:29.223 "dhchap_dhgroups": [ 00:38:29.223 "null", 00:38:29.223 "ffdhe2048", 00:38:29.223 "ffdhe3072", 00:38:29.223 "ffdhe4096", 00:38:29.223 "ffdhe6144", 00:38:29.223 "ffdhe8192" 00:38:29.223 ] 00:38:29.223 } 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "method": "bdev_nvme_attach_controller", 00:38:29.223 "params": { 00:38:29.223 "name": "nvme0", 00:38:29.223 "trtype": "TCP", 00:38:29.223 "adrfam": "IPv4", 00:38:29.223 "traddr": "127.0.0.1", 00:38:29.223 "trsvcid": "4420", 00:38:29.223 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:29.223 "prchk_reftag": false, 00:38:29.223 "prchk_guard": false, 00:38:29.223 "ctrlr_loss_timeout_sec": 0, 00:38:29.223 "reconnect_delay_sec": 0, 00:38:29.223 "fast_io_fail_timeout_sec": 0, 00:38:29.223 "psk": "key0", 00:38:29.223 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:29.223 "hdgst": false, 00:38:29.223 "ddgst": false, 00:38:29.223 "multipath": "multipath" 00:38:29.223 } 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "method": "bdev_nvme_set_hotplug", 00:38:29.223 "params": { 00:38:29.223 "period_us": 100000, 00:38:29.223 "enable": false 00:38:29.223 } 00:38:29.223 }, 00:38:29.223 { 00:38:29.223 "method": "bdev_wait_for_examine" 00:38:29.223 } 00:38:29.223 ] 00:38:29.223 }, 00:38:29.223 { 
00:38:29.223 "subsystem": "nbd", 00:38:29.223 "config": [] 00:38:29.223 } 00:38:29.223 ] 00:38:29.223 }' 00:38:29.223 08:35:43 keyring_file -- keyring/file.sh@115 -- # killprocess 1981929 00:38:29.223 08:35:43 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1981929 ']' 00:38:29.223 08:35:43 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1981929 00:38:29.223 08:35:43 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:29.223 08:35:43 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:29.223 08:35:43 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1981929 00:38:29.223 08:35:43 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:29.223 08:35:43 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:29.223 08:35:43 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1981929' 00:38:29.223 killing process with pid 1981929 00:38:29.224 08:35:43 keyring_file -- common/autotest_common.sh@973 -- # kill 1981929 00:38:29.224 Received shutdown signal, test time was about 1.000000 seconds 00:38:29.224 00:38:29.224 Latency(us) 00:38:29.224 [2024-11-20T07:35:43.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:29.224 [2024-11-20T07:35:43.252Z] =================================================================================================================== 00:38:29.224 [2024-11-20T07:35:43.252Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:29.224 08:35:43 keyring_file -- common/autotest_common.sh@978 -- # wait 1981929 00:38:29.483 08:35:43 keyring_file -- keyring/file.sh@118 -- # bperfpid=1983446 00:38:29.483 08:35:43 keyring_file -- keyring/file.sh@120 -- # waitforlisten 1983446 /var/tmp/bperf.sock 00:38:29.483 08:35:43 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 1983446 ']' 00:38:29.483 08:35:43 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:38:29.483 08:35:43 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:38:29.483 08:35:43 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:29.483 08:35:43 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:29.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:29.483 08:35:43 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:38:29.483 "subsystems": [ 00:38:29.483 { 00:38:29.483 "subsystem": "keyring", 00:38:29.483 "config": [ 00:38:29.483 { 00:38:29.483 "method": "keyring_file_add_key", 00:38:29.483 "params": { 00:38:29.483 "name": "key0", 00:38:29.483 "path": "/tmp/tmp.po5Wo97BP0" 00:38:29.483 } 00:38:29.483 }, 00:38:29.483 { 00:38:29.483 "method": "keyring_file_add_key", 00:38:29.483 "params": { 00:38:29.483 "name": "key1", 00:38:29.483 "path": "/tmp/tmp.SWSx3J7NUW" 00:38:29.483 } 00:38:29.483 } 00:38:29.483 ] 00:38:29.483 }, 00:38:29.483 { 00:38:29.483 "subsystem": "iobuf", 00:38:29.483 "config": [ 00:38:29.483 { 00:38:29.483 "method": "iobuf_set_options", 00:38:29.483 "params": { 00:38:29.483 "small_pool_count": 8192, 00:38:29.483 "large_pool_count": 1024, 00:38:29.483 "small_bufsize": 8192, 00:38:29.483 "large_bufsize": 135168, 00:38:29.483 "enable_numa": false 00:38:29.483 } 00:38:29.483 } 00:38:29.483 ] 00:38:29.483 }, 00:38:29.483 { 00:38:29.483 "subsystem": "sock", 00:38:29.483 "config": [ 00:38:29.483 { 00:38:29.483 "method": "sock_set_default_impl", 00:38:29.483 "params": { 00:38:29.483 "impl_name": "posix" 00:38:29.483 } 00:38:29.483 }, 00:38:29.483 { 00:38:29.483 "method": "sock_impl_set_options", 00:38:29.483 "params": { 00:38:29.483 "impl_name": "ssl", 00:38:29.483 "recv_buf_size": 4096, 00:38:29.483 
"send_buf_size": 4096, 00:38:29.483 "enable_recv_pipe": true, 00:38:29.483 "enable_quickack": false, 00:38:29.483 "enable_placement_id": 0, 00:38:29.483 "enable_zerocopy_send_server": true, 00:38:29.483 "enable_zerocopy_send_client": false, 00:38:29.483 "zerocopy_threshold": 0, 00:38:29.483 "tls_version": 0, 00:38:29.483 "enable_ktls": false 00:38:29.483 } 00:38:29.483 }, 00:38:29.483 { 00:38:29.483 "method": "sock_impl_set_options", 00:38:29.483 "params": { 00:38:29.483 "impl_name": "posix", 00:38:29.483 "recv_buf_size": 2097152, 00:38:29.483 "send_buf_size": 2097152, 00:38:29.483 "enable_recv_pipe": true, 00:38:29.483 "enable_quickack": false, 00:38:29.483 "enable_placement_id": 0, 00:38:29.483 "enable_zerocopy_send_server": true, 00:38:29.483 "enable_zerocopy_send_client": false, 00:38:29.483 "zerocopy_threshold": 0, 00:38:29.483 "tls_version": 0, 00:38:29.483 "enable_ktls": false 00:38:29.483 } 00:38:29.483 } 00:38:29.483 ] 00:38:29.483 }, 00:38:29.483 { 00:38:29.483 "subsystem": "vmd", 00:38:29.483 "config": [] 00:38:29.483 }, 00:38:29.483 { 00:38:29.483 "subsystem": "accel", 00:38:29.483 "config": [ 00:38:29.483 { 00:38:29.483 "method": "accel_set_options", 00:38:29.483 "params": { 00:38:29.483 "small_cache_size": 128, 00:38:29.483 "large_cache_size": 16, 00:38:29.483 "task_count": 2048, 00:38:29.483 "sequence_count": 2048, 00:38:29.483 "buf_count": 2048 00:38:29.483 } 00:38:29.483 } 00:38:29.483 ] 00:38:29.483 }, 00:38:29.483 { 00:38:29.483 "subsystem": "bdev", 00:38:29.483 "config": [ 00:38:29.483 { 00:38:29.483 "method": "bdev_set_options", 00:38:29.483 "params": { 00:38:29.483 "bdev_io_pool_size": 65535, 00:38:29.483 "bdev_io_cache_size": 256, 00:38:29.483 "bdev_auto_examine": true, 00:38:29.483 "iobuf_small_cache_size": 128, 00:38:29.483 "iobuf_large_cache_size": 16 00:38:29.483 } 00:38:29.483 }, 00:38:29.483 { 00:38:29.483 "method": "bdev_raid_set_options", 00:38:29.483 "params": { 00:38:29.483 "process_window_size_kb": 1024, 00:38:29.483 
"process_max_bandwidth_mb_sec": 0 00:38:29.483 } 00:38:29.483 }, 00:38:29.483 { 00:38:29.483 "method": "bdev_iscsi_set_options", 00:38:29.483 "params": { 00:38:29.483 "timeout_sec": 30 00:38:29.483 } 00:38:29.483 }, 00:38:29.483 { 00:38:29.483 "method": "bdev_nvme_set_options", 00:38:29.483 "params": { 00:38:29.483 "action_on_timeout": "none", 00:38:29.483 "timeout_us": 0, 00:38:29.483 "timeout_admin_us": 0, 00:38:29.483 "keep_alive_timeout_ms": 10000, 00:38:29.483 "arbitration_burst": 0, 00:38:29.484 "low_priority_weight": 0, 00:38:29.484 "medium_priority_weight": 0, 00:38:29.484 "high_priority_weight": 0, 00:38:29.484 "nvme_adminq_poll_period_us": 10000, 00:38:29.484 "nvme_ioq_poll_period_us": 0, 00:38:29.484 "io_queue_requests": 512, 00:38:29.484 "delay_cmd_submit": true, 00:38:29.484 "transport_retry_count": 4, 00:38:29.484 "bdev_retry_count": 3, 00:38:29.484 "transport_ack_timeout": 0, 00:38:29.484 "ctrlr_loss_timeout_sec": 0, 00:38:29.484 "reconnect_delay_sec": 0, 00:38:29.484 "fast_io_fail_timeout_sec": 0, 00:38:29.484 "disable_auto_failback": false, 00:38:29.484 "generate_uuids": false, 00:38:29.484 "transport_tos": 0, 00:38:29.484 "nvme_error_stat": false, 00:38:29.484 "rdma_srq_size": 0, 00:38:29.484 "io_path_stat": false, 00:38:29.484 "allow_accel_sequence": false, 00:38:29.484 "rdma_max_cq_size": 0, 00:38:29.484 "rdma_cm_event_timeout_ms": 0, 00:38:29.484 "dhchap_digests": [ 00:38:29.484 "sha256", 00:38:29.484 "sha384", 00:38:29.484 "sha512" 00:38:29.484 ], 00:38:29.484 "dhchap_dhgroups": [ 00:38:29.484 "null", 00:38:29.484 "ffdhe2048", 00:38:29.484 "ffdhe3072", 00:38:29.484 "ffdhe4096", 00:38:29.484 "ffdhe6144", 00:38:29.484 "ffdhe8192" 00:38:29.484 ] 00:38:29.484 } 00:38:29.484 }, 00:38:29.484 { 00:38:29.484 "method": "bdev_nvme_attach_controller", 00:38:29.484 "params": { 00:38:29.484 "name": "nvme0", 00:38:29.484 "trtype": "TCP", 00:38:29.484 "adrfam": "IPv4", 00:38:29.484 "traddr": "127.0.0.1", 00:38:29.484 "trsvcid": "4420", 00:38:29.484 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:38:29.484 "prchk_reftag": false, 00:38:29.484 "prchk_guard": false, 00:38:29.484 "ctrlr_loss_timeout_sec": 0, 00:38:29.484 "reconnect_delay_sec": 0, 00:38:29.484 "fast_io_fail_timeout_sec": 0, 00:38:29.484 "psk": "key0", 00:38:29.484 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:29.484 "hdgst": false, 00:38:29.484 "ddgst": false, 00:38:29.484 "multipath": "multipath" 00:38:29.484 } 00:38:29.484 }, 00:38:29.484 { 00:38:29.484 "method": "bdev_nvme_set_hotplug", 00:38:29.484 "params": { 00:38:29.484 "period_us": 100000, 00:38:29.484 "enable": false 00:38:29.484 } 00:38:29.484 }, 00:38:29.484 { 00:38:29.484 "method": "bdev_wait_for_examine" 00:38:29.484 } 00:38:29.484 ] 00:38:29.484 }, 00:38:29.484 { 00:38:29.484 "subsystem": "nbd", 00:38:29.484 "config": [] 00:38:29.484 } 00:38:29.484 ] 00:38:29.484 }' 00:38:29.484 08:35:43 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:29.484 08:35:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:29.484 [2024-11-20 08:35:43.328626] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:38:29.484 [2024-11-20 08:35:43.328675] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1983446 ] 00:38:29.484 [2024-11-20 08:35:43.400207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:29.484 [2024-11-20 08:35:43.437109] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:29.763 [2024-11-20 08:35:43.598322] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:30.384 08:35:44 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:30.384 08:35:44 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:38:30.384 08:35:44 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:38:30.384 08:35:44 keyring_file -- keyring/file.sh@121 -- # jq length 00:38:30.384 08:35:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.384 08:35:44 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:38:30.384 08:35:44 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:38:30.384 08:35:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:38:30.384 08:35:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:30.384 08:35:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:30.384 08:35:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:38:30.384 08:35:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.642 08:35:44 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:38:30.642 08:35:44 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:38:30.642 08:35:44 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:38:30.642 08:35:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:38:30.642 08:35:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:30.642 08:35:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:38:30.642 08:35:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:30.901 08:35:44 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:38:30.901 08:35:44 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:38:30.901 08:35:44 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:38:30.901 08:35:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:38:31.160 08:35:44 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:38:31.160 08:35:44 keyring_file -- keyring/file.sh@1 -- # cleanup 00:38:31.160 08:35:44 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.po5Wo97BP0 /tmp/tmp.SWSx3J7NUW 00:38:31.160 08:35:44 keyring_file -- keyring/file.sh@20 -- # killprocess 1983446 00:38:31.160 08:35:44 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1983446 ']' 00:38:31.160 08:35:44 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1983446 00:38:31.160 08:35:44 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:31.160 08:35:44 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:31.160 08:35:44 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1983446 00:38:31.160 08:35:45 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:31.160 08:35:45 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:31.160 08:35:45 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 1983446' 00:38:31.160 killing process with pid 1983446 00:38:31.160 08:35:45 keyring_file -- common/autotest_common.sh@973 -- # kill 1983446 00:38:31.160 Received shutdown signal, test time was about 1.000000 seconds 00:38:31.160 00:38:31.160 Latency(us) 00:38:31.160 [2024-11-20T07:35:45.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:31.160 [2024-11-20T07:35:45.188Z] =================================================================================================================== 00:38:31.160 [2024-11-20T07:35:45.188Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:38:31.160 08:35:45 keyring_file -- common/autotest_common.sh@978 -- # wait 1983446 00:38:31.160 08:35:45 keyring_file -- keyring/file.sh@21 -- # killprocess 1981923 00:38:31.160 08:35:45 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 1981923 ']' 00:38:31.160 08:35:45 keyring_file -- common/autotest_common.sh@958 -- # kill -0 1981923 00:38:31.160 08:35:45 keyring_file -- common/autotest_common.sh@959 -- # uname 00:38:31.160 08:35:45 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:31.160 08:35:45 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1981923 00:38:31.419 08:35:45 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:31.419 08:35:45 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:31.419 08:35:45 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1981923' 00:38:31.419 killing process with pid 1981923 00:38:31.419 08:35:45 keyring_file -- common/autotest_common.sh@973 -- # kill 1981923 00:38:31.419 08:35:45 keyring_file -- common/autotest_common.sh@978 -- # wait 1981923 00:38:31.678 00:38:31.678 real 0m11.729s 00:38:31.678 user 0m29.159s 00:38:31.678 sys 0m2.713s 00:38:31.678 08:35:45 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:38:31.678 08:35:45 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:38:31.678 ************************************ 00:38:31.678 END TEST keyring_file 00:38:31.678 ************************************ 00:38:31.678 08:35:45 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:38:31.679 08:35:45 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:31.679 08:35:45 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:38:31.679 08:35:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:31.679 08:35:45 -- common/autotest_common.sh@10 -- # set +x 00:38:31.679 ************************************ 00:38:31.679 START TEST keyring_linux 00:38:31.679 ************************************ 00:38:31.679 08:35:45 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:38:31.679 Joined session keyring: 555066150 00:38:31.679 * Looking for test storage... 
00:38:31.679 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:38:31.679 08:35:45 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:38:31.679 08:35:45 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:38:31.679 08:35:45 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:38:31.939 08:35:45 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@345 -- # : 1 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@368 -- # return 0 00:38:31.939 08:35:45 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:31.939 08:35:45 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:38:31.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.939 --rc genhtml_branch_coverage=1 00:38:31.939 --rc genhtml_function_coverage=1 00:38:31.939 --rc genhtml_legend=1 00:38:31.939 --rc geninfo_all_blocks=1 00:38:31.939 --rc geninfo_unexecuted_blocks=1 00:38:31.939 00:38:31.939 ' 00:38:31.939 08:35:45 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:38:31.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.939 --rc genhtml_branch_coverage=1 00:38:31.939 --rc genhtml_function_coverage=1 00:38:31.939 --rc genhtml_legend=1 00:38:31.939 --rc geninfo_all_blocks=1 00:38:31.939 --rc geninfo_unexecuted_blocks=1 00:38:31.939 00:38:31.939 ' 
00:38:31.939 08:35:45 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:38:31.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.939 --rc genhtml_branch_coverage=1 00:38:31.939 --rc genhtml_function_coverage=1 00:38:31.939 --rc genhtml_legend=1 00:38:31.939 --rc geninfo_all_blocks=1 00:38:31.939 --rc geninfo_unexecuted_blocks=1 00:38:31.939 00:38:31.939 ' 00:38:31.939 08:35:45 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:38:31.939 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:31.939 --rc genhtml_branch_coverage=1 00:38:31.939 --rc genhtml_function_coverage=1 00:38:31.939 --rc genhtml_legend=1 00:38:31.939 --rc geninfo_all_blocks=1 00:38:31.939 --rc geninfo_unexecuted_blocks=1 00:38:31.939 00:38:31.939 ' 00:38:31.939 08:35:45 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:38:31.939 08:35:45 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:38:31.939 08:35:45 
keyring_linux -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:31.939 08:35:45 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:31.939 08:35:45 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.939 08:35:45 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.939 08:35:45 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.939 08:35:45 keyring_linux -- paths/export.sh@5 -- # export PATH 00:38:31.939 08:35:45 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:38:31.939 08:35:45 keyring_linux -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:38:31.939 08:35:45 keyring_linux -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:38:31.939 08:35:45 keyring_linux -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@50 -- # : 0 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:38:31.939 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@54 -- # have_pci_nics=0 00:38:31.939 08:35:45 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:38:31.939 08:35:45 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:38:31.939 08:35:45 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:38:31.939 08:35:45 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:38:31.939 08:35:45 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:38:31.939 08:35:45 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:38:31.939 08:35:45 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:38:31.939 08:35:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:31.939 08:35:45 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:38:31.939 08:35:45 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:38:31.939 08:35:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:31.939 08:35:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:38:31.939 08:35:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:38:31.939 08:35:45 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:38:31.940 08:35:45 keyring_linux -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:38:31.940 08:35:45 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:38:31.940 08:35:45 keyring_linux -- nvmf/common.sh@507 -- # python - 00:38:31.940 08:35:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:38:31.940 08:35:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:38:31.940 /tmp/:spdk-test:key0 00:38:31.940 08:35:45 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:38:31.940 08:35:45 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:38:31.940 08:35:45 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:38:31.940 08:35:45 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:38:31.940 08:35:45 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:38:31.940 08:35:45 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:38:31.940 08:35:45 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:38:31.940 08:35:45 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:38:31.940 08:35:45 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:38:31.940 08:35:45 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:38:31.940 08:35:45 keyring_linux -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:38:31.940 08:35:45 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:38:31.940 08:35:45 keyring_linux -- nvmf/common.sh@507 -- # python - 00:38:31.940 08:35:45 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:38:31.940 08:35:45 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:38:31.940 /tmp/:spdk-test:key1 00:38:31.940 08:35:45 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1983999 00:38:31.940 08:35:45 keyring_linux -- keyring/linux.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:38:31.940 08:35:45 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1983999 00:38:31.940 08:35:45 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1983999 ']' 00:38:31.940 08:35:45 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:31.940 08:35:45 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:31.940 08:35:45 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:31.940 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:31.940 08:35:45 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:31.940 08:35:45 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:31.940 [2024-11-20 08:35:45.949159] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:38:31.940 [2024-11-20 08:35:45.949213] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1983999 ] 00:38:32.199 [2024-11-20 08:35:46.024638] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.199 [2024-11-20 08:35:46.066377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:38:32.458 08:35:46 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:32.458 08:35:46 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:32.458 08:35:46 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:38:32.458 08:35:46 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:32.458 08:35:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:32.458 [2024-11-20 08:35:46.275016] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:32.458 null0 00:38:32.458 [2024-11-20 08:35:46.307076] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:38:32.458 [2024-11-20 08:35:46.307443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:38:32.458 08:35:46 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:32.458 08:35:46 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:38:32.458 853507971 00:38:32.458 08:35:46 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:38:32.458 896645362 00:38:32.458 08:35:46 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1984014 00:38:32.458 08:35:46 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1984014 /var/tmp/bperf.sock 00:38:32.458 08:35:46 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:38:32.458 08:35:46 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 1984014 ']' 00:38:32.458 08:35:46 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:38:32.458 08:35:46 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:32.458 08:35:46 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:38:32.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:38:32.458 08:35:46 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:32.458 08:35:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:38:32.458 [2024-11-20 08:35:46.377277] Starting SPDK v25.01-pre git sha1 6f7b42a3a / DPDK 24.03.0 initialization... 
00:38:32.458 [2024-11-20 08:35:46.377320] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1984014 ] 00:38:32.458 [2024-11-20 08:35:46.451355] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:32.718 [2024-11-20 08:35:46.493445] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:38:32.718 08:35:46 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:32.718 08:35:46 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:38:32.718 08:35:46 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:38:32.718 08:35:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:38:32.718 08:35:46 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:38:32.718 08:35:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:38:32.977 08:35:46 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:32.977 08:35:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:38:33.236 [2024-11-20 08:35:47.128372] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:38:33.236 nvme0n1 00:38:33.236 08:35:47 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:38:33.236 08:35:47 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:38:33.236 08:35:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:33.236 08:35:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:33.236 08:35:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:33.236 08:35:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:33.495 08:35:47 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:38:33.495 08:35:47 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:33.495 08:35:47 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:38:33.495 08:35:47 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:38:33.495 08:35:47 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:38:33.495 08:35:47 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:38:33.495 08:35:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:33.754 08:35:47 keyring_linux -- keyring/linux.sh@25 -- # sn=853507971 00:38:33.754 08:35:47 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:38:33.754 08:35:47 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:33.754 08:35:47 keyring_linux -- keyring/linux.sh@26 -- # [[ 853507971 == \8\5\3\5\0\7\9\7\1 ]] 00:38:33.754 08:35:47 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 853507971 00:38:33.754 08:35:47 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:38:33.754 08:35:47 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:38:33.754 Running I/O for 1 seconds... 00:38:34.950 21851.00 IOPS, 85.36 MiB/s 00:38:34.950 Latency(us) 00:38:34.950 [2024-11-20T07:35:48.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:34.950 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:38:34.950 nvme0n1 : 1.01 21847.98 85.34 0.00 0.00 5839.32 1911.47 7115.34 00:38:34.950 [2024-11-20T07:35:48.978Z] =================================================================================================================== 00:38:34.950 [2024-11-20T07:35:48.979Z] Total : 21847.98 85.34 0.00 0.00 5839.32 1911.47 7115.34 00:38:34.951 { 00:38:34.951 "results": [ 00:38:34.951 { 00:38:34.951 "job": "nvme0n1", 00:38:34.951 "core_mask": "0x2", 00:38:34.951 "workload": "randread", 00:38:34.951 "status": "finished", 00:38:34.951 "queue_depth": 128, 00:38:34.951 "io_size": 4096, 00:38:34.951 "runtime": 1.005997, 00:38:34.951 "iops": 21847.977677865838, 00:38:34.951 "mibps": 85.34366280416343, 00:38:34.951 "io_failed": 0, 00:38:34.951 "io_timeout": 0, 00:38:34.951 "avg_latency_us": 5839.319627609905, 00:38:34.951 "min_latency_us": 1911.4666666666667, 00:38:34.951 "max_latency_us": 7115.337142857143 00:38:34.951 } 00:38:34.951 ], 00:38:34.951 "core_count": 1 00:38:34.951 } 00:38:34.951 08:35:48 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:38:34.951 08:35:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:38:34.951 08:35:48 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:38:34.951 08:35:48 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:38:34.951 08:35:48 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:38:34.951 08:35:48 
keyring_linux -- keyring/linux.sh@22 -- # jq length 00:38:34.951 08:35:48 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:38:34.951 08:35:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:38:35.210 08:35:49 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:38:35.210 08:35:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:38:35.210 08:35:49 keyring_linux -- keyring/linux.sh@23 -- # return 00:38:35.210 08:35:49 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:35.210 08:35:49 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:38:35.210 08:35:49 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:35.210 08:35:49 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:38:35.210 08:35:49 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:35.210 08:35:49 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:38:35.210 08:35:49 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:35.211 08:35:49 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:35.211 08:35:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:38:35.469 [2024-11-20 08:35:49.316757] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:38:35.469 [2024-11-20 08:35:49.317497] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x876a70 (107): Transport endpoint is not connected 00:38:35.469 [2024-11-20 08:35:49.318491] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x876a70 (9): Bad file descriptor 00:38:35.469 [2024-11-20 08:35:49.319492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:38:35.469 [2024-11-20 08:35:49.319501] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:38:35.469 [2024-11-20 08:35:49.319508] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:38:35.469 [2024-11-20 08:35:49.319517] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:38:35.469 request: 00:38:35.469 { 00:38:35.469 "name": "nvme0", 00:38:35.469 "trtype": "tcp", 00:38:35.469 "traddr": "127.0.0.1", 00:38:35.469 "adrfam": "ipv4", 00:38:35.469 "trsvcid": "4420", 00:38:35.469 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:35.469 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:35.469 "prchk_reftag": false, 00:38:35.469 "prchk_guard": false, 00:38:35.469 "hdgst": false, 00:38:35.469 "ddgst": false, 00:38:35.469 "psk": ":spdk-test:key1", 00:38:35.469 "allow_unrecognized_csi": false, 00:38:35.469 "method": "bdev_nvme_attach_controller", 00:38:35.469 "req_id": 1 00:38:35.469 } 00:38:35.469 Got JSON-RPC error response 00:38:35.469 response: 00:38:35.469 { 00:38:35.469 "code": -5, 00:38:35.469 "message": "Input/output error" 00:38:35.469 } 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@33 -- # sn=853507971 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 853507971 00:38:35.469 1 links removed 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:38:35.469 
08:35:49 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@33 -- # sn=896645362 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 896645362 00:38:35.469 1 links removed 00:38:35.469 08:35:49 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1984014 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1984014 ']' 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1984014 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1984014 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1984014' 00:38:35.469 killing process with pid 1984014 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@973 -- # kill 1984014 00:38:35.469 Received shutdown signal, test time was about 1.000000 seconds 00:38:35.469 00:38:35.469 Latency(us) 00:38:35.469 [2024-11-20T07:35:49.497Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:35.469 [2024-11-20T07:35:49.497Z] =================================================================================================================== 00:38:35.469 [2024-11-20T07:35:49.497Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:38:35.469 08:35:49 keyring_linux -- common/autotest_common.sh@978 -- # wait 1984014 
00:38:35.729 08:35:49 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1983999
00:38:35.729 08:35:49 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 1983999 ']'
00:38:35.729 08:35:49 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 1983999
00:38:35.729 08:35:49 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:38:35.729 08:35:49 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:38:35.729 08:35:49 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 1983999
00:38:35.729 08:35:49 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:38:35.729 08:35:49 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:38:35.729 08:35:49 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 1983999'
00:38:35.729 killing process with pid 1983999
00:38:35.729 08:35:49 keyring_linux -- common/autotest_common.sh@973 -- # kill 1983999
00:38:35.729 08:35:49 keyring_linux -- common/autotest_common.sh@978 -- # wait 1983999
00:38:35.988
00:38:35.988 real 0m4.315s
00:38:35.988 user 0m8.186s
00:38:35.988 sys 0m1.381s
00:38:35.989 08:35:49 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:38:35.989 08:35:49 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:38:35.989 ************************************
00:38:35.989 END TEST keyring_linux
00:38:35.989 ************************************
00:38:35.989 08:35:49 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:38:35.989 08:35:49 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:38:35.989 08:35:49 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:38:35.989 08:35:49 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:38:35.989 08:35:49 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:38:35.989 08:35:49 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:38:35.989 08:35:49 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:38:35.989 08:35:49 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:38:35.989 08:35:49 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:38:35.989 08:35:49 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:38:35.989 08:35:49 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:38:35.989 08:35:49 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:38:35.989 08:35:49 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:38:35.989 08:35:49 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:38:35.989 08:35:49 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:38:35.989 08:35:49 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:38:35.989 08:35:49 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:38:35.989 08:35:49 -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:35.989 08:35:49 -- common/autotest_common.sh@10 -- # set +x
00:38:35.989 08:35:49 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:38:35.989 08:35:49 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:38:35.989 08:35:49 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:38:35.989 08:35:49 -- common/autotest_common.sh@10 -- # set +x
00:38:41.263 INFO: APP EXITING
00:38:41.263 INFO: killing all VMs
00:38:41.263 INFO: killing vhost app
00:38:41.263 INFO: EXIT DONE
00:38:43.801 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:38:43.801 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:38:43.801 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:38:47.092 Cleaning
00:38:47.092 Removing: /var/run/dpdk/spdk0/config
00:38:47.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:38:47.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:38:47.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:38:47.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:38:47.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:38:47.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:38:47.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:38:47.092 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:38:47.092 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:38:47.092 Removing: /var/run/dpdk/spdk0/hugepage_info
00:38:47.092 Removing: /var/run/dpdk/spdk1/config
00:38:47.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:38:47.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:38:47.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:38:47.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:38:47.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:38:47.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:38:47.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:38:47.092 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:38:47.092 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:38:47.092 Removing: /var/run/dpdk/spdk1/hugepage_info
00:38:47.092 Removing: /var/run/dpdk/spdk2/config
00:38:47.092 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:38:47.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:38:47.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:38:47.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:38:47.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:38:47.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:38:47.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:38:47.093 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:38:47.093 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:38:47.093 Removing: /var/run/dpdk/spdk2/hugepage_info
00:38:47.093 Removing: /var/run/dpdk/spdk3/config
00:38:47.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:38:47.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:38:47.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:38:47.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:38:47.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:38:47.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:38:47.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:38:47.093 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:38:47.093 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:38:47.093 Removing: /var/run/dpdk/spdk3/hugepage_info
00:38:47.093 Removing: /var/run/dpdk/spdk4/config
00:38:47.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:38:47.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:38:47.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:38:47.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:38:47.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:38:47.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:38:47.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:38:47.093 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:38:47.093 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:38:47.093 Removing: /var/run/dpdk/spdk4/hugepage_info
00:38:47.093 Removing: /dev/shm/bdev_svc_trace.1
00:38:47.093 Removing: /dev/shm/nvmf_trace.0
00:38:47.093 Removing: /dev/shm/spdk_tgt_trace.pid1500512
00:38:47.093 Removing: /var/run/dpdk/spdk0
00:38:47.093 Removing: /var/run/dpdk/spdk1
00:38:47.093 Removing: /var/run/dpdk/spdk2
00:38:47.093 Removing: /var/run/dpdk/spdk3
00:38:47.093 Removing: /var/run/dpdk/spdk4
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1498128
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1499183
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1500512
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1501075
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1501981
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1502111
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1503083
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1503194
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1503453
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1505191
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1506465
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1506754
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1507041
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1507351
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1507641
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1507890
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1508142
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1508431
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1509142
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1512170
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1512344
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1512471
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1512667
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1512971
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1513165
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1513466
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1513657
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1513946
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1513960
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1514215
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1514228
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1514787
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1515040
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1515335
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1519096
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1523591
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1534382
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1534854
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1539382
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1539640
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1544153
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1550065
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1552669
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1563156
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1572259
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1573958
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1574896
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1592571
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1596667
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1643638
00:38:47.093 Removing: /var/run/dpdk/spdk_pid1648951
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1654953
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1661443
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1661490
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1662253
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1663103
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1664016
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1664498
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1664696
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1664932
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1664951
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1664953
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1665865
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1666778
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1667706
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1668174
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1668177
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1668448
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1669623
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1670628
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1679254
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1707673
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1712787
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1714535
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1716249
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1716397
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1716631
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1716645
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1717152
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1719006
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1719994
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1720488
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1722592
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1723082
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1723585
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1727889
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1733519
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1733520
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1733521
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1737326
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1745902
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1749759
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1756068
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1757740
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1759177
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1760690
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1765245
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1769611
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1773648
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1781265
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1781285
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1786028
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1786242
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1786485
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1786734
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1786892
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1791544
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1792047
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1796618
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1799172
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1804588
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1810450
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1819257
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1826450
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1826507
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1845606
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1846079
00:38:47.352 Removing: /var/run/dpdk/spdk_pid1846771
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1847242
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1847980
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1848455
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1848928
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1849616
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1854188
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1854428
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1860534
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1860726
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1866075
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1870331
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1880306
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1880788
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1885109
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1885542
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1889816
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1895479
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1898189
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1908780
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1917489
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1919286
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1920136
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1936200
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1940254
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1942941
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1951664
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1951671
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1956822
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1958713
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1960674
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1961727
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1963917
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1964975
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1973747
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1974209
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1974667
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1977160
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1977625
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1978093
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1981923
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1981929
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1983446
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1983999
00:38:47.612 Removing: /var/run/dpdk/spdk_pid1984014
00:38:47.612 Clean
00:38:47.873 08:36:01 -- common/autotest_common.sh@1453 -- # return 0
00:38:47.873 08:36:01 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:38:47.873 08:36:01 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:47.873 08:36:01 -- common/autotest_common.sh@10 -- # set +x
00:38:47.873 08:36:01 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:38:47.873 08:36:01 -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:47.873 08:36:01 -- common/autotest_common.sh@10 -- # set +x
00:38:47.873 08:36:01 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:38:47.873 08:36:01 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:38:47.873 08:36:01 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:38:47.873 08:36:01 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:38:47.873 08:36:01 -- spdk/autotest.sh@398 -- # hostname
00:38:47.873 08:36:01 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:38:47.873 geninfo: WARNING: invalid characters removed from testname!
00:39:09.816 08:36:22 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:11.194 08:36:24 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:13.227 08:36:26 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:15.134 08:36:28 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:16.509 08:36:30 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:18.408 08:36:32 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:39:20.313 08:36:34 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:39:20.313 08:36:34 -- spdk/autorun.sh@1 -- $ timing_finish
00:39:20.313 08:36:34 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:39:20.313 08:36:34 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:39:20.313 08:36:34 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:39:20.313 08:36:34 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:39:20.313 + [[ -n 1420824 ]]
00:39:20.313 + sudo kill 1420824
00:39:20.323 [Pipeline] }
00:39:20.338 [Pipeline] // stage
00:39:20.344 [Pipeline] }
00:39:20.358 [Pipeline] // timeout
00:39:20.363 [Pipeline] }
00:39:20.377 [Pipeline] // catchError
00:39:20.382 [Pipeline] }
00:39:20.396 [Pipeline] // wrap
00:39:20.402 [Pipeline] }
00:39:20.416 [Pipeline] // catchError
00:39:20.425 [Pipeline] stage
00:39:20.427 [Pipeline] { (Epilogue)
00:39:20.440 [Pipeline] catchError
00:39:20.441 [Pipeline] {
00:39:20.453 [Pipeline] echo
00:39:20.455 Cleanup processes
00:39:20.462 [Pipeline] sh
00:39:20.747 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:20.747 1995253 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:20.761 [Pipeline] sh
00:39:21.045 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:39:21.045 ++ grep -v 'sudo pgrep'
00:39:21.045 ++ awk '{print $1}'
00:39:21.045 + sudo kill -9
00:39:21.045 + true
00:39:21.057 [Pipeline] sh
00:39:21.341 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:39:33.585 [Pipeline] sh
00:39:33.867 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:39:33.867 Artifacts sizes are good
00:39:33.881 [Pipeline] archiveArtifacts
00:39:33.888 Archiving artifacts
00:39:34.015 [Pipeline] sh
00:39:34.304 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:39:34.318 [Pipeline] cleanWs
00:39:34.329 [WS-CLEANUP] Deleting project workspace...
00:39:34.329 [WS-CLEANUP] Deferred wipeout is used...
00:39:34.336 [WS-CLEANUP] done
00:39:34.339 [Pipeline] }
00:39:34.359 [Pipeline] // catchError
00:39:34.372 [Pipeline] sh
00:39:34.658 + logger -p user.info -t JENKINS-CI
00:39:34.667 [Pipeline] }
00:39:34.680 [Pipeline] // stage
00:39:34.686 [Pipeline] }
00:39:34.702 [Pipeline] // node
00:39:34.708 [Pipeline] End of Pipeline
00:39:34.757 Finished: SUCCESS